Test Report: KVM_Linux_crio 18421

715903ea5b86ab0a28d26e6fe572bd5327dfa9fc:2024-03-18:33639

Tests failed (29/325)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 155.94
47 TestAddons/parallel/LocalPath 19.36
53 TestAddons/StoppedEnableDisable 154.22
172 TestMultiControlPlane/serial/StopSecondaryNode 142.01
174 TestMultiControlPlane/serial/RestartSecondaryNode 58.66
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 410.74
179 TestMultiControlPlane/serial/StopCluster 141.97
239 TestMultiNode/serial/RestartKeepsNodes 312.49
241 TestMultiNode/serial/StopMultiNode 141.49
248 TestPreload 250.42
256 TestKubernetesUpgrade 422.01
328 TestStartStop/group/old-k8s-version/serial/FirstStart 291.5
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.18
356 TestStartStop/group/embed-certs/serial/Stop 139.02
359 TestStartStop/group/no-preload/serial/Stop 139.17
360 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
361 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 110.94
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
369 TestStartStop/group/old-k8s-version/serial/SecondStart 750.97
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.54
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.22
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.26
374 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.43
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 359.95
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 332.61
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 347.74
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 101.48
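
Each failure below can be investigated in isolation. A minimal sketch for re-running a single failed test with the standard Go test runner (the ./test/integration package path is an assumption based on the addons_test.go and helpers_test.go files referenced in the logs; it is not stated in this report):

    # Re-run only the failed subtest; -run accepts the slash-separated subtest name.
    # Assumes the out/minikube-linux-amd64 binary invoked by the tests has already been built.
    go test -v -timeout 60m ./test/integration -run 'TestAddons/parallel/Ingress'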
TestAddons/parallel/Ingress (155.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-791443 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-791443 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-791443 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c02a476d-3fe2-4ed7-8f9f-166a582aa95e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c02a476d-3fe2-4ed7-8f9f-166a582aa95e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003214994s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791443 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.648920638s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-791443 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.131
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-791443 addons disable ingress-dns --alsologtostderr -v=1: (1.260524141s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-791443 addons disable ingress --alsologtostderr -v=1: (7.850045638s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-791443 -n addons-791443
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-791443 logs -n 25: (1.395090543s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-022655                                                                     | download-only-022655 | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC | 18 Mar 24 20:31 UTC |
	| delete  | -p download-only-029911                                                                     | download-only-029911 | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC | 18 Mar 24 20:31 UTC |
	| delete  | -p download-only-820089                                                                     | download-only-820089 | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC | 18 Mar 24 20:31 UTC |
	| delete  | -p download-only-022655                                                                     | download-only-022655 | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC | 18 Mar 24 20:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-766908 | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC |                     |
	|         | binary-mirror-766908                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36299                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-766908                                                                     | binary-mirror-766908 | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC | 18 Mar 24 20:31 UTC |
	| addons  | disable dashboard -p                                                                        | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC |                     |
	|         | addons-791443                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC |                     |
	|         | addons-791443                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-791443 --wait=true                                                                | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:31 UTC | 18 Mar 24 20:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	|         | addons-791443                                                                               |                      |         |         |                     |                     |
	| addons  | addons-791443 addons                                                                        | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-791443 ssh cat                                                                       | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	|         | /opt/local-path-provisioner/pvc-227ad095-46e9-4689-9ab7-b7b7ca5fbdc2_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-791443 addons disable                                                                | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	|         | -p addons-791443                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-791443 ip                                                                            | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	| addons  | addons-791443 addons disable                                                                | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-791443 addons disable                                                                | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	|         | addons-791443                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:35 UTC |
	|         | -p addons-791443                                                                            |                      |         |         |                     |                     |
	| addons  | addons-791443 addons                                                                        | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:35 UTC | 18 Mar 24 20:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-791443 ssh curl -s                                                                   | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-791443 addons                                                                        | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:36 UTC | 18 Mar 24 20:36 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-791443 ip                                                                            | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:38 UTC | 18 Mar 24 20:38 UTC |
	| addons  | addons-791443 addons disable                                                                | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:38 UTC | 18 Mar 24 20:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-791443 addons disable                                                                | addons-791443        | jenkins | v1.32.0 | 18 Mar 24 20:38 UTC | 18 Mar 24 20:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 20:31:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 20:31:37.521201   13597 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:31:37.521304   13597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:31:37.521313   13597 out.go:304] Setting ErrFile to fd 2...
	I0318 20:31:37.521317   13597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:31:37.521491   13597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:31:37.522046   13597 out.go:298] Setting JSON to false
	I0318 20:31:37.522794   13597 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":841,"bootTime":1710793056,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:31:37.522845   13597 start.go:139] virtualization: kvm guest
	I0318 20:31:37.524798   13597 out.go:177] * [addons-791443] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:31:37.525945   13597 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 20:31:37.525997   13597 notify.go:220] Checking for updates...
	I0318 20:31:37.527174   13597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:31:37.528504   13597 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:31:37.529735   13597 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:31:37.530782   13597 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 20:31:37.532068   13597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 20:31:37.533264   13597 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:31:37.562682   13597 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 20:31:37.563891   13597 start.go:297] selected driver: kvm2
	I0318 20:31:37.563900   13597 start.go:901] validating driver "kvm2" against <nil>
	I0318 20:31:37.563910   13597 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 20:31:37.564517   13597 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:31:37.564588   13597 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 20:31:37.578494   13597 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 20:31:37.578527   13597 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 20:31:37.578723   13597 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:31:37.578781   13597 cni.go:84] Creating CNI manager for ""
	I0318 20:31:37.578793   13597 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 20:31:37.578803   13597 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 20:31:37.578847   13597 start.go:340] cluster config:
	{Name:addons-791443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-791443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:31:37.578931   13597 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:31:37.580588   13597 out.go:177] * Starting "addons-791443" primary control-plane node in "addons-791443" cluster
	I0318 20:31:37.581807   13597 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:31:37.581831   13597 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 20:31:37.581844   13597 cache.go:56] Caching tarball of preloaded images
	I0318 20:31:37.581905   13597 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:31:37.581915   13597 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:31:37.582207   13597 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/config.json ...
	I0318 20:31:37.582232   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/config.json: {Name:mk7602b51e2898fc124b924d0fbeaf1875ecd927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:31:37.582343   13597 start.go:360] acquireMachinesLock for addons-791443: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:31:37.582388   13597 start.go:364] duration metric: took 33.269µs to acquireMachinesLock for "addons-791443"
	I0318 20:31:37.582403   13597 start.go:93] Provisioning new machine with config: &{Name:addons-791443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-791443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:31:37.582459   13597 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 20:31:37.583924   13597 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 20:31:37.584034   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:31:37.584073   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:31:37.597074   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0318 20:31:37.597450   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:31:37.598010   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:31:37.598030   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:31:37.598386   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:31:37.598584   13597 main.go:141] libmachine: (addons-791443) Calling .GetMachineName
	I0318 20:31:37.598767   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:31:37.598935   13597 start.go:159] libmachine.API.Create for "addons-791443" (driver="kvm2")
	I0318 20:31:37.598960   13597 client.go:168] LocalClient.Create starting
	I0318 20:31:37.599001   13597 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 20:31:37.799309   13597 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 20:31:37.877628   13597 main.go:141] libmachine: Running pre-create checks...
	I0318 20:31:37.877650   13597 main.go:141] libmachine: (addons-791443) Calling .PreCreateCheck
	I0318 20:31:37.878109   13597 main.go:141] libmachine: (addons-791443) Calling .GetConfigRaw
	I0318 20:31:37.878522   13597 main.go:141] libmachine: Creating machine...
	I0318 20:31:37.878539   13597 main.go:141] libmachine: (addons-791443) Calling .Create
	I0318 20:31:37.878673   13597 main.go:141] libmachine: (addons-791443) Creating KVM machine...
	I0318 20:31:37.879796   13597 main.go:141] libmachine: (addons-791443) DBG | found existing default KVM network
	I0318 20:31:37.880479   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:37.880350   13619 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0318 20:31:37.880501   13597 main.go:141] libmachine: (addons-791443) DBG | created network xml: 
	I0318 20:31:37.880513   13597 main.go:141] libmachine: (addons-791443) DBG | <network>
	I0318 20:31:37.880521   13597 main.go:141] libmachine: (addons-791443) DBG |   <name>mk-addons-791443</name>
	I0318 20:31:37.880527   13597 main.go:141] libmachine: (addons-791443) DBG |   <dns enable='no'/>
	I0318 20:31:37.880531   13597 main.go:141] libmachine: (addons-791443) DBG |   
	I0318 20:31:37.880539   13597 main.go:141] libmachine: (addons-791443) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 20:31:37.880545   13597 main.go:141] libmachine: (addons-791443) DBG |     <dhcp>
	I0318 20:31:37.880553   13597 main.go:141] libmachine: (addons-791443) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 20:31:37.880560   13597 main.go:141] libmachine: (addons-791443) DBG |     </dhcp>
	I0318 20:31:37.880566   13597 main.go:141] libmachine: (addons-791443) DBG |   </ip>
	I0318 20:31:37.880570   13597 main.go:141] libmachine: (addons-791443) DBG |   
	I0318 20:31:37.880576   13597 main.go:141] libmachine: (addons-791443) DBG | </network>
	I0318 20:31:37.880581   13597 main.go:141] libmachine: (addons-791443) DBG | 
	I0318 20:31:37.885727   13597 main.go:141] libmachine: (addons-791443) DBG | trying to create private KVM network mk-addons-791443 192.168.39.0/24...
	I0318 20:31:37.945997   13597 main.go:141] libmachine: (addons-791443) DBG | private KVM network mk-addons-791443 192.168.39.0/24 created
	I0318 20:31:37.946038   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:37.945962   13619 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:31:37.946058   13597 main.go:141] libmachine: (addons-791443) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443 ...
	I0318 20:31:37.946075   13597 main.go:141] libmachine: (addons-791443) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 20:31:37.946099   13597 main.go:141] libmachine: (addons-791443) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 20:31:38.201812   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:38.201719   13619 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa...
	I0318 20:31:38.530598   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:38.530466   13619 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/addons-791443.rawdisk...
	I0318 20:31:38.530638   13597 main.go:141] libmachine: (addons-791443) DBG | Writing magic tar header
	I0318 20:31:38.530654   13597 main.go:141] libmachine: (addons-791443) DBG | Writing SSH key tar header
	I0318 20:31:38.530667   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:38.530613   13619 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443 ...
	I0318 20:31:38.530813   13597 main.go:141] libmachine: (addons-791443) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443
	I0318 20:31:38.530848   13597 main.go:141] libmachine: (addons-791443) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443 (perms=drwx------)
	I0318 20:31:38.530875   13597 main.go:141] libmachine: (addons-791443) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 20:31:38.530886   13597 main.go:141] libmachine: (addons-791443) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 20:31:38.530901   13597 main.go:141] libmachine: (addons-791443) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 20:31:38.530910   13597 main.go:141] libmachine: (addons-791443) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 20:31:38.530923   13597 main.go:141] libmachine: (addons-791443) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:31:38.530932   13597 main.go:141] libmachine: (addons-791443) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 20:31:38.530946   13597 main.go:141] libmachine: (addons-791443) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 20:31:38.530954   13597 main.go:141] libmachine: (addons-791443) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 20:31:38.530962   13597 main.go:141] libmachine: (addons-791443) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 20:31:38.530975   13597 main.go:141] libmachine: (addons-791443) Creating domain...
	I0318 20:31:38.530988   13597 main.go:141] libmachine: (addons-791443) DBG | Checking permissions on dir: /home/jenkins
	I0318 20:31:38.530996   13597 main.go:141] libmachine: (addons-791443) DBG | Checking permissions on dir: /home
	I0318 20:31:38.531010   13597 main.go:141] libmachine: (addons-791443) DBG | Skipping /home - not owner
	I0318 20:31:38.531859   13597 main.go:141] libmachine: (addons-791443) define libvirt domain using xml: 
	I0318 20:31:38.531895   13597 main.go:141] libmachine: (addons-791443) <domain type='kvm'>
	I0318 20:31:38.531907   13597 main.go:141] libmachine: (addons-791443)   <name>addons-791443</name>
	I0318 20:31:38.531939   13597 main.go:141] libmachine: (addons-791443)   <memory unit='MiB'>4000</memory>
	I0318 20:31:38.531951   13597 main.go:141] libmachine: (addons-791443)   <vcpu>2</vcpu>
	I0318 20:31:38.531959   13597 main.go:141] libmachine: (addons-791443)   <features>
	I0318 20:31:38.531972   13597 main.go:141] libmachine: (addons-791443)     <acpi/>
	I0318 20:31:38.531980   13597 main.go:141] libmachine: (addons-791443)     <apic/>
	I0318 20:31:38.531987   13597 main.go:141] libmachine: (addons-791443)     <pae/>
	I0318 20:31:38.531991   13597 main.go:141] libmachine: (addons-791443)     
	I0318 20:31:38.531999   13597 main.go:141] libmachine: (addons-791443)   </features>
	I0318 20:31:38.532009   13597 main.go:141] libmachine: (addons-791443)   <cpu mode='host-passthrough'>
	I0318 20:31:38.532018   13597 main.go:141] libmachine: (addons-791443)   
	I0318 20:31:38.532030   13597 main.go:141] libmachine: (addons-791443)   </cpu>
	I0318 20:31:38.532039   13597 main.go:141] libmachine: (addons-791443)   <os>
	I0318 20:31:38.532047   13597 main.go:141] libmachine: (addons-791443)     <type>hvm</type>
	I0318 20:31:38.532058   13597 main.go:141] libmachine: (addons-791443)     <boot dev='cdrom'/>
	I0318 20:31:38.532066   13597 main.go:141] libmachine: (addons-791443)     <boot dev='hd'/>
	I0318 20:31:38.532074   13597 main.go:141] libmachine: (addons-791443)     <bootmenu enable='no'/>
	I0318 20:31:38.532081   13597 main.go:141] libmachine: (addons-791443)   </os>
	I0318 20:31:38.532089   13597 main.go:141] libmachine: (addons-791443)   <devices>
	I0318 20:31:38.532105   13597 main.go:141] libmachine: (addons-791443)     <disk type='file' device='cdrom'>
	I0318 20:31:38.532122   13597 main.go:141] libmachine: (addons-791443)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/boot2docker.iso'/>
	I0318 20:31:38.532133   13597 main.go:141] libmachine: (addons-791443)       <target dev='hdc' bus='scsi'/>
	I0318 20:31:38.532145   13597 main.go:141] libmachine: (addons-791443)       <readonly/>
	I0318 20:31:38.532155   13597 main.go:141] libmachine: (addons-791443)     </disk>
	I0318 20:31:38.532166   13597 main.go:141] libmachine: (addons-791443)     <disk type='file' device='disk'>
	I0318 20:31:38.532182   13597 main.go:141] libmachine: (addons-791443)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 20:31:38.532218   13597 main.go:141] libmachine: (addons-791443)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/addons-791443.rawdisk'/>
	I0318 20:31:38.532233   13597 main.go:141] libmachine: (addons-791443)       <target dev='hda' bus='virtio'/>
	I0318 20:31:38.532241   13597 main.go:141] libmachine: (addons-791443)     </disk>
	I0318 20:31:38.532253   13597 main.go:141] libmachine: (addons-791443)     <interface type='network'>
	I0318 20:31:38.532264   13597 main.go:141] libmachine: (addons-791443)       <source network='mk-addons-791443'/>
	I0318 20:31:38.532269   13597 main.go:141] libmachine: (addons-791443)       <model type='virtio'/>
	I0318 20:31:38.532275   13597 main.go:141] libmachine: (addons-791443)     </interface>
	I0318 20:31:38.532280   13597 main.go:141] libmachine: (addons-791443)     <interface type='network'>
	I0318 20:31:38.532288   13597 main.go:141] libmachine: (addons-791443)       <source network='default'/>
	I0318 20:31:38.532293   13597 main.go:141] libmachine: (addons-791443)       <model type='virtio'/>
	I0318 20:31:38.532301   13597 main.go:141] libmachine: (addons-791443)     </interface>
	I0318 20:31:38.532309   13597 main.go:141] libmachine: (addons-791443)     <serial type='pty'>
	I0318 20:31:38.532332   13597 main.go:141] libmachine: (addons-791443)       <target port='0'/>
	I0318 20:31:38.532352   13597 main.go:141] libmachine: (addons-791443)     </serial>
	I0318 20:31:38.532358   13597 main.go:141] libmachine: (addons-791443)     <console type='pty'>
	I0318 20:31:38.532368   13597 main.go:141] libmachine: (addons-791443)       <target type='serial' port='0'/>
	I0318 20:31:38.532377   13597 main.go:141] libmachine: (addons-791443)     </console>
	I0318 20:31:38.532382   13597 main.go:141] libmachine: (addons-791443)     <rng model='virtio'>
	I0318 20:31:38.532390   13597 main.go:141] libmachine: (addons-791443)       <backend model='random'>/dev/random</backend>
	I0318 20:31:38.532397   13597 main.go:141] libmachine: (addons-791443)     </rng>
	I0318 20:31:38.532402   13597 main.go:141] libmachine: (addons-791443)     
	I0318 20:31:38.532408   13597 main.go:141] libmachine: (addons-791443)     
	I0318 20:31:38.532414   13597 main.go:141] libmachine: (addons-791443)   </devices>
	I0318 20:31:38.532424   13597 main.go:141] libmachine: (addons-791443) </domain>
	I0318 20:31:38.532433   13597 main.go:141] libmachine: (addons-791443) 
	I0318 20:31:38.537756   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:9e:05:92 in network default
	I0318 20:31:38.538250   13597 main.go:141] libmachine: (addons-791443) Ensuring networks are active...
	I0318 20:31:38.538264   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:38.538749   13597 main.go:141] libmachine: (addons-791443) Ensuring network default is active
	I0318 20:31:38.539013   13597 main.go:141] libmachine: (addons-791443) Ensuring network mk-addons-791443 is active
	I0318 20:31:38.539451   13597 main.go:141] libmachine: (addons-791443) Getting domain xml...
	I0318 20:31:38.539995   13597 main.go:141] libmachine: (addons-791443) Creating domain...
	I0318 20:31:39.860839   13597 main.go:141] libmachine: (addons-791443) Waiting to get IP...
	I0318 20:31:39.861732   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:39.862123   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:39.862161   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:39.862111   13619 retry.go:31] will retry after 310.582127ms: waiting for machine to come up
	I0318 20:31:40.175278   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:40.175742   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:40.175766   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:40.175692   13619 retry.go:31] will retry after 313.716233ms: waiting for machine to come up
	I0318 20:31:40.491303   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:40.491776   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:40.491801   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:40.491730   13619 retry.go:31] will retry after 295.958898ms: waiting for machine to come up
	I0318 20:31:40.789271   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:40.789790   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:40.789818   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:40.789751   13619 retry.go:31] will retry after 500.590659ms: waiting for machine to come up
	I0318 20:31:41.292477   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:41.292865   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:41.292908   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:41.292815   13619 retry.go:31] will retry after 660.707917ms: waiting for machine to come up
	I0318 20:31:41.954942   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:41.955378   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:41.955396   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:41.955339   13619 retry.go:31] will retry after 850.787535ms: waiting for machine to come up
	I0318 20:31:42.807795   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:42.808186   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:42.808208   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:42.808146   13619 retry.go:31] will retry after 761.555078ms: waiting for machine to come up
	I0318 20:31:43.571554   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:43.571956   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:43.571990   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:43.571927   13619 retry.go:31] will retry after 1.304163553s: waiting for machine to come up
	I0318 20:31:44.878390   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:44.878797   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:44.878825   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:44.878740   13619 retry.go:31] will retry after 1.615292799s: waiting for machine to come up
	I0318 20:31:46.496427   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:46.496820   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:46.496854   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:46.496770   13619 retry.go:31] will retry after 1.688552406s: waiting for machine to come up
	I0318 20:31:48.186355   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:48.186646   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:48.186672   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:48.186625   13619 retry.go:31] will retry after 1.823534408s: waiting for machine to come up
	I0318 20:31:50.011272   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:50.011773   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:50.011798   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:50.011732   13619 retry.go:31] will retry after 2.796678473s: waiting for machine to come up
	I0318 20:31:52.809899   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:52.810171   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:52.810202   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:52.810143   13619 retry.go:31] will retry after 2.920120371s: waiting for machine to come up
	I0318 20:31:55.733841   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:31:55.734176   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find current IP address of domain addons-791443 in network mk-addons-791443
	I0318 20:31:55.734201   13597 main.go:141] libmachine: (addons-791443) DBG | I0318 20:31:55.734135   13619 retry.go:31] will retry after 4.371740776s: waiting for machine to come up
	I0318 20:32:00.106963   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:00.107310   13597 main.go:141] libmachine: (addons-791443) Found IP for machine: 192.168.39.131
	I0318 20:32:00.107350   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has current primary IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:00.107361   13597 main.go:141] libmachine: (addons-791443) Reserving static IP address...
	I0318 20:32:00.107644   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find host DHCP lease matching {name: "addons-791443", mac: "52:54:00:64:22:51", ip: "192.168.39.131"} in network mk-addons-791443
	I0318 20:32:00.175198   13597 main.go:141] libmachine: (addons-791443) DBG | Getting to WaitForSSH function...
	I0318 20:32:00.175229   13597 main.go:141] libmachine: (addons-791443) Reserved static IP address: 192.168.39.131
	I0318 20:32:00.175243   13597 main.go:141] libmachine: (addons-791443) Waiting for SSH to be available...
	I0318 20:32:00.177681   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:00.178044   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443
	I0318 20:32:00.178073   13597 main.go:141] libmachine: (addons-791443) DBG | unable to find defined IP address of network mk-addons-791443 interface with MAC address 52:54:00:64:22:51
	I0318 20:32:00.178281   13597 main.go:141] libmachine: (addons-791443) DBG | Using SSH client type: external
	I0318 20:32:00.178305   13597 main.go:141] libmachine: (addons-791443) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa (-rw-------)
	I0318 20:32:00.178333   13597 main.go:141] libmachine: (addons-791443) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 20:32:00.178357   13597 main.go:141] libmachine: (addons-791443) DBG | About to run SSH command:
	I0318 20:32:00.178368   13597 main.go:141] libmachine: (addons-791443) DBG | exit 0
	I0318 20:32:00.189690   13597 main.go:141] libmachine: (addons-791443) DBG | SSH cmd err, output: exit status 255: 
	I0318 20:32:00.189714   13597 main.go:141] libmachine: (addons-791443) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0318 20:32:00.189722   13597 main.go:141] libmachine: (addons-791443) DBG | command : exit 0
	I0318 20:32:00.189726   13597 main.go:141] libmachine: (addons-791443) DBG | err     : exit status 255
	I0318 20:32:00.189760   13597 main.go:141] libmachine: (addons-791443) DBG | output  : 
	I0318 20:32:03.190388   13597 main.go:141] libmachine: (addons-791443) DBG | Getting to WaitForSSH function...
	I0318 20:32:03.192617   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.192894   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:03.192940   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.193076   13597 main.go:141] libmachine: (addons-791443) DBG | Using SSH client type: external
	I0318 20:32:03.193090   13597 main.go:141] libmachine: (addons-791443) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa (-rw-------)
	I0318 20:32:03.193115   13597 main.go:141] libmachine: (addons-791443) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 20:32:03.193129   13597 main.go:141] libmachine: (addons-791443) DBG | About to run SSH command:
	I0318 20:32:03.193140   13597 main.go:141] libmachine: (addons-791443) DBG | exit 0
	I0318 20:32:03.316982   13597 main.go:141] libmachine: (addons-791443) DBG | SSH cmd err, output: <nil>: 
	I0318 20:32:03.317213   13597 main.go:141] libmachine: (addons-791443) KVM machine creation complete!
	I0318 20:32:03.317500   13597 main.go:141] libmachine: (addons-791443) Calling .GetConfigRaw
	I0318 20:32:03.318009   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:03.318211   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:03.318353   13597 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 20:32:03.318369   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:03.319819   13597 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 20:32:03.319841   13597 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 20:32:03.319848   13597 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 20:32:03.319856   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:03.321820   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.322112   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:03.322137   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.322275   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:03.322449   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.322608   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.322735   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:03.322898   13597 main.go:141] libmachine: Using SSH client type: native
	I0318 20:32:03.323099   13597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0318 20:32:03.323111   13597 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 20:32:03.424985   13597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
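(Editor's note) The repeated "exit 0" probes above are how libmachine decides the guest is reachable: it keeps opening an SSH session and running a no-op command until one succeeds. Below is a minimal, hypothetical Go sketch of that loop, assuming golang.org/x/crypto/ssh and illustrative host/key/retry values; it is not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH retries a no-op command over SSH until the guest answers,
// mirroring the repeated "exit 0" probes in the log above.
func waitForSSH(addr, keyPath string, attempts int) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	for i := 0; i < attempts; i++ {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				runErr := session.Run("exit 0") // same no-op command the log runs
				session.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second) // the log retries roughly every 3 seconds
	}
	return fmt.Errorf("ssh on %s not reachable after %d attempts", addr, attempts)
}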
	I0318 20:32:03.425006   13597 main.go:141] libmachine: Detecting the provisioner...
	I0318 20:32:03.425014   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:03.427688   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.428034   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:03.428062   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.428194   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:03.428417   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.428582   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.428707   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:03.428876   13597 main.go:141] libmachine: Using SSH client type: native
	I0318 20:32:03.429090   13597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0318 20:32:03.429104   13597 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 20:32:03.529998   13597 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 20:32:03.530076   13597 main.go:141] libmachine: found compatible host: buildroot
	I0318 20:32:03.530097   13597 main.go:141] libmachine: Provisioning with buildroot...
	I0318 20:32:03.530111   13597 main.go:141] libmachine: (addons-791443) Calling .GetMachineName
	I0318 20:32:03.530311   13597 buildroot.go:166] provisioning hostname "addons-791443"
	I0318 20:32:03.530333   13597 main.go:141] libmachine: (addons-791443) Calling .GetMachineName
	I0318 20:32:03.530492   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:03.532730   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.533049   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:03.533075   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.533224   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:03.533387   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.533521   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.533645   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:03.533769   13597 main.go:141] libmachine: Using SSH client type: native
	I0318 20:32:03.533915   13597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0318 20:32:03.533927   13597 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-791443 && echo "addons-791443" | sudo tee /etc/hostname
	I0318 20:32:03.653286   13597 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-791443
	
	I0318 20:32:03.653317   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:03.655964   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.656287   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:03.656308   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.656430   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:03.656602   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.656784   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.656949   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:03.657106   13597 main.go:141] libmachine: Using SSH client type: native
	I0318 20:32:03.657259   13597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0318 20:32:03.657275   13597 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-791443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-791443/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-791443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:32:03.767173   13597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:32:03.767204   13597 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:32:03.767240   13597 buildroot.go:174] setting up certificates
	I0318 20:32:03.767250   13597 provision.go:84] configureAuth start
	I0318 20:32:03.767260   13597 main.go:141] libmachine: (addons-791443) Calling .GetMachineName
	I0318 20:32:03.767538   13597 main.go:141] libmachine: (addons-791443) Calling .GetIP
	I0318 20:32:03.769707   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.770041   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:03.770072   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.770197   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:03.771938   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.772219   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:03.772266   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.772390   13597 provision.go:143] copyHostCerts
	I0318 20:32:03.772471   13597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:32:03.772605   13597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:32:03.772696   13597 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:32:03.772759   13597 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.addons-791443 san=[127.0.0.1 192.168.39.131 addons-791443 localhost minikube]
	I0318 20:32:03.969984   13597 provision.go:177] copyRemoteCerts
	I0318 20:32:03.970036   13597 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:32:03.970056   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:03.972539   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.972898   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:03.972946   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:03.973094   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:03.973265   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:03.973419   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:03.973568   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:04.055542   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:32:04.081648   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 20:32:04.107616   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 20:32:04.133464   13597 provision.go:87] duration metric: took 366.201502ms to configureAuth
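(Editor's note) The configureAuth step above generates a server certificate signed by the machine CA with the SANs listed in the log (127.0.0.1, 192.168.39.131, addons-791443, localhost, minikube). A hedged Go sketch of that kind of issuance follows; key size, validity window and subject fields are illustrative assumptions, not minikube's exact values.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA, carrying the
// IP and DNS SANs seen in the "generating server cert" line above.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-791443"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.131")},
		DNSNames:     []string{"addons-791443", "localhost", "minikube"},
	}
	// DER-encoded certificate signed by the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}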
	I0318 20:32:04.133508   13597 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:32:04.133676   13597 config.go:182] Loaded profile config "addons-791443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:32:04.133774   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:04.136384   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.136709   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:04.136740   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.136865   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:04.137091   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:04.137293   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:04.137463   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:04.137640   13597 main.go:141] libmachine: Using SSH client type: native
	I0318 20:32:04.137793   13597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0318 20:32:04.137806   13597 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:32:04.414969   13597 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 20:32:04.415000   13597 main.go:141] libmachine: Checking connection to Docker...
	I0318 20:32:04.415010   13597 main.go:141] libmachine: (addons-791443) Calling .GetURL
	I0318 20:32:04.416420   13597 main.go:141] libmachine: (addons-791443) DBG | Using libvirt version 6000000
	I0318 20:32:04.419025   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.419383   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:04.419423   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.419587   13597 main.go:141] libmachine: Docker is up and running!
	I0318 20:32:04.419602   13597 main.go:141] libmachine: Reticulating splines...
	I0318 20:32:04.419609   13597 client.go:171] duration metric: took 26.820641676s to LocalClient.Create
	I0318 20:32:04.419630   13597 start.go:167] duration metric: took 26.820696395s to libmachine.API.Create "addons-791443"
	I0318 20:32:04.419644   13597 start.go:293] postStartSetup for "addons-791443" (driver="kvm2")
	I0318 20:32:04.419657   13597 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:32:04.419680   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:04.419903   13597 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:32:04.419927   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:04.422264   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.422594   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:04.422629   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.422770   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:04.422956   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:04.423101   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:04.423226   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:04.504090   13597 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:32:04.509027   13597 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:32:04.509050   13597 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:32:04.509125   13597 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:32:04.509160   13597 start.go:296] duration metric: took 89.508749ms for postStartSetup
	I0318 20:32:04.509200   13597 main.go:141] libmachine: (addons-791443) Calling .GetConfigRaw
	I0318 20:32:04.509755   13597 main.go:141] libmachine: (addons-791443) Calling .GetIP
	I0318 20:32:04.512036   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.512424   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:04.512453   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.512719   13597 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/config.json ...
	I0318 20:32:04.512883   13597 start.go:128] duration metric: took 26.930415074s to createHost
	I0318 20:32:04.512924   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:04.514957   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.515270   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:04.515296   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.515478   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:04.515655   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:04.515848   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:04.515986   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:04.516142   13597 main.go:141] libmachine: Using SSH client type: native
	I0318 20:32:04.516309   13597 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.131 22 <nil> <nil>}
	I0318 20:32:04.516322   13597 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:32:04.618065   13597 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710793924.606929248
	
	I0318 20:32:04.618090   13597 fix.go:216] guest clock: 1710793924.606929248
	I0318 20:32:04.618100   13597 fix.go:229] Guest: 2024-03-18 20:32:04.606929248 +0000 UTC Remote: 2024-03-18 20:32:04.512893804 +0000 UTC m=+27.036037567 (delta=94.035444ms)
	I0318 20:32:04.618120   13597 fix.go:200] guest clock delta is within tolerance: 94.035444ms
	I0318 20:32:04.618128   13597 start.go:83] releasing machines lock for "addons-791443", held for 27.035729083s
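(Editor's note) The fix.go lines above run `date +%s.%N` on the guest and compare it against the host clock, proceeding only when the drift is within tolerance. A small sketch of that check, with an assumed tolerance value for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDeltaOK parses `date +%s.%N` output from the guest and reports whether
// the drift from the host clock stays inside the given tolerance.
func clockDeltaOK(guestDateOutput string, tolerance time.Duration) (time.Duration, bool, error) {
	parts := strings.SplitN(strings.TrimSpace(guestDateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock: %w", err)
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate fraction to nanoseconds
		nsec, _ = strconv.ParseInt(frac, 10, 64)
	}
	delta := time.Since(time.Unix(sec, nsec))
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}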
	I0318 20:32:04.618154   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:04.618393   13597 main.go:141] libmachine: (addons-791443) Calling .GetIP
	I0318 20:32:04.620703   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.621079   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:04.621109   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.621204   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:04.621615   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:04.621800   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:04.621877   13597 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:32:04.621921   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:04.622025   13597 ssh_runner.go:195] Run: cat /version.json
	I0318 20:32:04.622046   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:04.624636   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.624933   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.625014   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:04.625040   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.625154   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:04.625302   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:04.625322   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:04.625322   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:04.625488   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:04.625496   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:04.625684   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:04.625678   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:04.625827   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:04.625975   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:04.727162   13597 ssh_runner.go:195] Run: systemctl --version
	I0318 20:32:04.733633   13597 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:32:04.907145   13597 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:32:04.913592   13597 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:32:04.913647   13597 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:32:04.935187   13597 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 20:32:04.935210   13597 start.go:494] detecting cgroup driver to use...
	I0318 20:32:04.935266   13597 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:32:04.958431   13597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:32:04.976186   13597 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:32:04.976235   13597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:32:04.993632   13597 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:32:05.008404   13597 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:32:05.125955   13597 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:32:05.291081   13597 docker.go:233] disabling docker service ...
	I0318 20:32:05.291153   13597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:32:05.307266   13597 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:32:05.320980   13597 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:32:05.439392   13597 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:32:05.565327   13597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:32:05.581109   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:32:05.601883   13597 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:32:05.601941   13597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:32:05.613154   13597 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:32:05.613242   13597 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:32:05.624199   13597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:32:05.635313   13597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:32:05.646302   13597 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:32:05.657238   13597 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:32:05.668041   13597 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:32:05.686741   13597 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:32:05.697482   13597 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:32:05.707189   13597 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 20:32:05.707242   13597 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 20:32:05.721557   13597 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 20:32:05.731132   13597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:32:05.853431   13597 ssh_runner.go:195] Run: sudo systemctl restart crio
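(Editor's note) The sed commands above rewrite individual keys (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) in the CRI-O drop-in before the daemon is restarted. A hedged Go sketch of the same "replace a key = value line" technique follows; doing it locally with a regexp rather than over ssh_runner is an assumption for illustration.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioKey rewrites a `key = value` line in a CRI-O config drop-in,
// matching any existing line that mentions the key, as the sed above does.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, updated, 0644)
}

// Example: setCrioKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.9"),
// followed by `systemctl daemon-reload` and `systemctl restart crio`, as in the log.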
	I0318 20:32:06.014786   13597 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:32:06.014870   13597 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:32:06.020229   13597 start.go:562] Will wait 60s for crictl version
	I0318 20:32:06.020290   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:32:06.024644   13597 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:32:06.064516   13597 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
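(Editor's note) The "Will wait 60s for socket path" step above simply polls until /var/run/crio/crio.sock exists before asking crictl for a version. A minimal sketch of that wait loop, with an assumed polling interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it appears or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

// Example: waitForSocket("/var/run/crio/crio.sock", 60*time.Second)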
	I0318 20:32:06.064628   13597 ssh_runner.go:195] Run: crio --version
	I0318 20:32:06.093795   13597 ssh_runner.go:195] Run: crio --version
	I0318 20:32:06.125568   13597 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:32:06.126587   13597 main.go:141] libmachine: (addons-791443) Calling .GetIP
	I0318 20:32:06.128930   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:06.129316   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:06.129351   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:06.129572   13597 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:32:06.136237   13597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:32:06.151173   13597 kubeadm.go:877] updating cluster {Name:addons-791443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:addons-791443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 20:32:06.151266   13597 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:32:06.151306   13597 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:32:06.185955   13597 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 20:32:06.186019   13597 ssh_runner.go:195] Run: which lz4
	I0318 20:32:06.190442   13597 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 20:32:06.195056   13597 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 20:32:06.195080   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 20:32:07.972467   13597 crio.go:462] duration metric: took 1.7820488s to copy over tarball
	I0318 20:32:07.972538   13597 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 20:32:10.861607   13597 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.889015013s)
	I0318 20:32:10.861644   13597 crio.go:469] duration metric: took 2.889149962s to extract the tarball
	I0318 20:32:10.861654   13597 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 20:32:10.904484   13597 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:32:10.954664   13597 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 20:32:10.954688   13597 cache_images.go:84] Images are preloaded, skipping loading
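(Editor's note) The preload decision above is driven by `sudo crictl images --output json`: before the tarball is copied the reference image is missing, and after extraction it is found. A sketch of that check follows; the JSON shape (an "images" array with "repoTags") is assumed from crictl's current output format.

package main

import (
	"encoding/json"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagesPreloaded reports whether any image tag in the crictl output contains
// the wanted reference, e.g. "kube-apiserver:v1.28.4".
func imagesPreloaded(crictlJSON []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(crictlJSON, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}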
	I0318 20:32:10.954698   13597 kubeadm.go:928] updating node { 192.168.39.131 8443 v1.28.4 crio true true} ...
	I0318 20:32:10.954829   13597 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-791443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-791443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:32:10.954890   13597 ssh_runner.go:195] Run: crio config
	I0318 20:32:11.004570   13597 cni.go:84] Creating CNI manager for ""
	I0318 20:32:11.004597   13597 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 20:32:11.004610   13597 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 20:32:11.004628   13597 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.131 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-791443 NodeName:addons-791443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.131"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.131 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 20:32:11.004775   13597 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.131
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-791443"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.131
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.131"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 20:32:11.004834   13597 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:32:11.015711   13597 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 20:32:11.015767   13597 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 20:32:11.026501   13597 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 20:32:11.047017   13597 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:32:11.066972   13597 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
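(Editor's note) The "scp memory --> <path>" lines above stream configs rendered in memory (kubelet drop-in, kubelet.service, kubeadm.yaml.new) straight to the guest over the existing SSH connection. A hedged sketch of that pattern follows; piping stdin into `sudo tee` is an assumed remote sink chosen for illustration, not necessarily what ssh_runner does.

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile streams an in-memory buffer to a path on the guest.
func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// Write stdin to the destination path; discard tee's echo of the contents.
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}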
	I0318 20:32:11.085730   13597 ssh_runner.go:195] Run: grep 192.168.39.131	control-plane.minikube.internal$ /etc/hosts
	I0318 20:32:11.089972   13597 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.131	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:32:11.102745   13597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:32:11.224153   13597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:32:11.243034   13597 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443 for IP: 192.168.39.131
	I0318 20:32:11.243055   13597 certs.go:194] generating shared ca certs ...
	I0318 20:32:11.243069   13597 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:11.243203   13597 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:32:11.449957   13597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt ...
	I0318 20:32:11.449988   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt: {Name:mkbe99957d6b3641103f3612d46b6c9399c3de1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:11.450164   13597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key ...
	I0318 20:32:11.450180   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key: {Name:mke08325205acdaa111a7a73a92b335eb76ad116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:11.450276   13597 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:32:11.881072   13597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt ...
	I0318 20:32:11.881103   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt: {Name:mk7b132d03a5c9af5bc29473977be7cee1582f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:11.881265   13597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key ...
	I0318 20:32:11.881276   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key: {Name:mk93409927ec28f4ce324691e9b2d69947c154d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:11.881342   13597 certs.go:256] generating profile certs ...
	I0318 20:32:11.881401   13597 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.key
	I0318 20:32:11.881416   13597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt with IP's: []
	I0318 20:32:12.200186   13597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt ...
	I0318 20:32:12.200216   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: {Name:mk239b17e7479abe75d6c4830e45a01dc073004d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:12.200373   13597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.key ...
	I0318 20:32:12.200384   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.key: {Name:mk21f59a77a8fa55e14762d8a79d3ca3ca64937f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:12.200452   13597 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.key.e426da55
	I0318 20:32:12.200470   13597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.crt.e426da55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.131]
	I0318 20:32:12.376440   13597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.crt.e426da55 ...
	I0318 20:32:12.376469   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.crt.e426da55: {Name:mkfce6d739a93ef542aa717f901b2842d054d84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:12.376613   13597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.key.e426da55 ...
	I0318 20:32:12.376626   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.key.e426da55: {Name:mkd0fb079980819aa0fe5488ca6f16361f7daf19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:12.376703   13597 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.crt.e426da55 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.crt
	I0318 20:32:12.376773   13597 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.key.e426da55 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.key
	I0318 20:32:12.376819   13597 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/proxy-client.key
	I0318 20:32:12.376835   13597 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/proxy-client.crt with IP's: []
	I0318 20:32:12.477946   13597 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/proxy-client.crt ...
	I0318 20:32:12.477974   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/proxy-client.crt: {Name:mk93fa67225e89fe8cbf35229be42f77f67ae240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:12.478132   13597 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/proxy-client.key ...
	I0318 20:32:12.478142   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/proxy-client.key: {Name:mk78dd0d8f701cbaecce20deee10c480d55d2a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:12.478294   13597 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:32:12.478325   13597 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:32:12.478345   13597 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:32:12.478369   13597 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:32:12.478938   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:32:12.508892   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:32:12.542859   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:32:12.755232   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:32:12.785860   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0318 20:32:12.811789   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 20:32:12.836991   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:32:12.862471   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 20:32:12.888479   13597 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:32:12.913583   13597 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 20:32:12.931280   13597 ssh_runner.go:195] Run: openssl version
	I0318 20:32:12.937760   13597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:32:12.950320   13597 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:32:12.955408   13597 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:32:12.955465   13597 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:32:12.961870   13597 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
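(Editor's note) The two commands above wire the minikube CA into the system trust store: `openssl x509 -hash` prints the subject hash (b5213941 here), and the CA is linked as `<hash>.0` under /etc/ssl/certs. A sketch of that wiring, run locally for illustration rather than through ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAIntoTrustStore symlinks a CA certificate as <subject-hash>.0 in the
// given certs directory, mimicking the `ln -fs` in the log above.
func linkCAIntoTrustStore(caPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mimic the force flag of `ln -fs`
	return os.Symlink(caPath, link)
}

// Example: linkCAIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")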
	I0318 20:32:12.974807   13597 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:32:12.979570   13597 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 20:32:12.979630   13597 kubeadm.go:391] StartCluster: {Name:addons-791443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 C
lusterName:addons-791443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:32:12.979721   13597 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 20:32:12.979762   13597 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 20:32:13.020552   13597 cri.go:89] found id: ""
	I0318 20:32:13.020611   13597 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 20:32:13.032143   13597 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 20:32:13.042964   13597 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 20:32:13.053729   13597 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 20:32:13.053745   13597 kubeadm.go:156] found existing configuration files:
	
	I0318 20:32:13.053787   13597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 20:32:13.064108   13597 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 20:32:13.064151   13597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 20:32:13.075222   13597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 20:32:13.085571   13597 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 20:32:13.085615   13597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 20:32:13.096511   13597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 20:32:13.106917   13597 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 20:32:13.106962   13597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 20:32:13.117741   13597 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 20:32:13.128166   13597 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 20:32:13.128221   13597 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 20:32:13.138799   13597 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 20:32:13.193206   13597 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 20:32:13.193275   13597 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 20:32:13.334629   13597 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 20:32:13.334761   13597 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 20:32:13.334873   13597 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 20:32:13.621238   13597 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 20:32:13.684298   13597 out.go:204]   - Generating certificates and keys ...
	I0318 20:32:13.684420   13597 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 20:32:13.684511   13597 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 20:32:13.750298   13597 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 20:32:13.879275   13597 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 20:32:13.969307   13597 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 20:32:14.195961   13597 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 20:32:14.384711   13597 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 20:32:14.384922   13597 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-791443 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	I0318 20:32:14.449225   13597 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 20:32:14.449360   13597 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-791443 localhost] and IPs [192.168.39.131 127.0.0.1 ::1]
	I0318 20:32:14.603817   13597 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 20:32:14.813004   13597 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 20:32:15.084051   13597 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 20:32:15.084167   13597 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 20:32:15.388632   13597 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 20:32:15.653587   13597 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 20:32:15.817570   13597 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 20:32:15.930550   13597 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 20:32:15.932474   13597 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 20:32:15.935066   13597 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 20:32:15.937247   13597 out.go:204]   - Booting up control plane ...
	I0318 20:32:15.937391   13597 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 20:32:15.937498   13597 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 20:32:15.937599   13597 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 20:32:15.952752   13597 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 20:32:15.953733   13597 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 20:32:15.953781   13597 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 20:32:16.086214   13597 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 20:32:21.588518   13597 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502860 seconds
	I0318 20:32:21.588664   13597 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 20:32:21.619388   13597 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 20:32:22.150569   13597 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 20:32:22.150801   13597 kubeadm.go:309] [mark-control-plane] Marking the node addons-791443 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 20:32:22.664995   13597 kubeadm.go:309] [bootstrap-token] Using token: voceof.d1xjwdz7w3sk7sxn
	I0318 20:32:22.666377   13597 out.go:204]   - Configuring RBAC rules ...
	I0318 20:32:22.666539   13597 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 20:32:22.675640   13597 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 20:32:22.683838   13597 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 20:32:22.687674   13597 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 20:32:22.691087   13597 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 20:32:22.696764   13597 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 20:32:22.710898   13597 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 20:32:22.939259   13597 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 20:32:23.082888   13597 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 20:32:23.085283   13597 kubeadm.go:309] 
	I0318 20:32:23.085340   13597 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 20:32:23.085350   13597 kubeadm.go:309] 
	I0318 20:32:23.085468   13597 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 20:32:23.085483   13597 kubeadm.go:309] 
	I0318 20:32:23.085513   13597 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 20:32:23.085617   13597 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 20:32:23.085704   13597 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 20:32:23.085717   13597 kubeadm.go:309] 
	I0318 20:32:23.085787   13597 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 20:32:23.085802   13597 kubeadm.go:309] 
	I0318 20:32:23.085884   13597 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 20:32:23.085900   13597 kubeadm.go:309] 
	I0318 20:32:23.085980   13597 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 20:32:23.086089   13597 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 20:32:23.086186   13597 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 20:32:23.086202   13597 kubeadm.go:309] 
	I0318 20:32:23.086316   13597 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 20:32:23.086437   13597 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 20:32:23.086447   13597 kubeadm.go:309] 
	I0318 20:32:23.086548   13597 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token voceof.d1xjwdz7w3sk7sxn \
	I0318 20:32:23.086698   13597 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 20:32:23.086750   13597 kubeadm.go:309] 	--control-plane 
	I0318 20:32:23.086760   13597 kubeadm.go:309] 
	I0318 20:32:23.086871   13597 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 20:32:23.086885   13597 kubeadm.go:309] 
	I0318 20:32:23.086994   13597 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token voceof.d1xjwdz7w3sk7sxn \
	I0318 20:32:23.087138   13597 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 20:32:23.088928   13597 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 20:32:23.088948   13597 cni.go:84] Creating CNI manager for ""
	I0318 20:32:23.088956   13597 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 20:32:23.090947   13597 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 20:32:23.092388   13597 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 20:32:23.107237   13597 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 20:32:23.170239   13597 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 20:32:23.170398   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-791443 minikube.k8s.io/updated_at=2024_03_18T20_32_23_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=addons-791443 minikube.k8s.io/primary=true
	I0318 20:32:23.170417   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:23.196276   13597 ops.go:34] apiserver oom_adj: -16
	I0318 20:32:23.303277   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:23.804028   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:24.303725   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:24.803739   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:25.303867   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:25.804131   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:26.304099   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:26.803735   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:27.304133   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:27.803957   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:28.303763   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:28.803706   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:29.303614   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:29.804119   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:30.304383   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:30.803653   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:31.303472   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:31.804119   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:32.303998   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:32.803363   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:33.303965   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:33.804039   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:34.304001   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:34.803639   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:35.303588   13597 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:32:35.421406   13597 kubeadm.go:1107] duration metric: took 12.251060033s to wait for elevateKubeSystemPrivileges
	W0318 20:32:35.421440   13597 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 20:32:35.421450   13597 kubeadm.go:393] duration metric: took 22.441824616s to StartCluster
	I0318 20:32:35.421471   13597 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:35.421585   13597 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:32:35.421924   13597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:32:35.422124   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 20:32:35.422136   13597 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.131 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:32:35.424253   13597 out.go:177] * Verifying Kubernetes components...
	I0318 20:32:35.422233   13597 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0318 20:32:35.422335   13597 config.go:182] Loaded profile config "addons-791443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:32:35.425795   13597 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:32:35.424319   13597 addons.go:69] Setting default-storageclass=true in profile "addons-791443"
	I0318 20:32:35.425849   13597 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-791443"
	I0318 20:32:35.424326   13597 addons.go:69] Setting yakd=true in profile "addons-791443"
	I0318 20:32:35.424329   13597 addons.go:69] Setting cloud-spanner=true in profile "addons-791443"
	I0318 20:32:35.424336   13597 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-791443"
	I0318 20:32:35.424338   13597 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-791443"
	I0318 20:32:35.424335   13597 addons.go:69] Setting metrics-server=true in profile "addons-791443"
	I0318 20:32:35.424347   13597 addons.go:69] Setting ingress=true in profile "addons-791443"
	I0318 20:32:35.424346   13597 addons.go:69] Setting volumesnapshots=true in profile "addons-791443"
	I0318 20:32:35.424349   13597 addons.go:69] Setting ingress-dns=true in profile "addons-791443"
	I0318 20:32:35.424351   13597 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-791443"
	I0318 20:32:35.424351   13597 addons.go:69] Setting gcp-auth=true in profile "addons-791443"
	I0318 20:32:35.424355   13597 addons.go:69] Setting inspektor-gadget=true in profile "addons-791443"
	I0318 20:32:35.424363   13597 addons.go:69] Setting registry=true in profile "addons-791443"
	I0318 20:32:35.424360   13597 addons.go:69] Setting storage-provisioner=true in profile "addons-791443"
	I0318 20:32:35.424371   13597 addons.go:69] Setting helm-tiller=true in profile "addons-791443"
	I0318 20:32:35.425921   13597 addons.go:234] Setting addon helm-tiller=true in "addons-791443"
	I0318 20:32:35.425941   13597 addons.go:234] Setting addon cloud-spanner=true in "addons-791443"
	I0318 20:32:35.425956   13597 addons.go:234] Setting addon yakd=true in "addons-791443"
	I0318 20:32:35.425966   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.425975   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.425986   13597 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-791443"
	I0318 20:32:35.425989   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.426011   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.426241   13597 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-791443"
	I0318 20:32:35.426303   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.426334   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.426346   13597 mustload.go:65] Loading cluster: addons-791443
	I0318 20:32:35.426362   13597 addons.go:234] Setting addon volumesnapshots=true in "addons-791443"
	I0318 20:32:35.426366   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.426370   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.426385   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.426387   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.426386   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.426467   13597 addons.go:234] Setting addon registry=true in "addons-791443"
	I0318 20:32:35.426491   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.426529   13597 config.go:182] Loaded profile config "addons-791443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:32:35.426699   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.426722   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.426729   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.426766   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.426816   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.426834   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.425936   13597 addons.go:234] Setting addon metrics-server=true in "addons-791443"
	I0318 20:32:35.426865   13597 addons.go:234] Setting addon storage-provisioner=true in "addons-791443"
	I0318 20:32:35.426872   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.426891   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.427155   13597 addons.go:234] Setting addon ingress-dns=true in "addons-791443"
	I0318 20:32:35.427172   13597 addons.go:234] Setting addon inspektor-gadget=true in "addons-791443"
	I0318 20:32:35.427185   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.427187   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.427192   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.427196   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.427214   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.427224   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.426349   13597 addons.go:234] Setting addon ingress=true in "addons-791443"
	I0318 20:32:35.427472   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.427527   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.427552   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.426843   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.427581   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.427593   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.426334   13597 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-791443"
	I0318 20:32:35.426310   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.427696   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.427705   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.427989   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.428010   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.428045   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.428050   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.428076   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.446995   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0318 20:32:35.447228   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38701
	I0318 20:32:35.447777   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.447913   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.448364   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.448387   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.448541   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.448560   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.448731   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.448884   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.449325   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.449365   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.450075   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.450115   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.450491   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40895
	I0318 20:32:35.450874   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.451323   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.451340   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.451677   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.451957   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.455097   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0318 20:32:35.456169   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.456997   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.457016   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.457405   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.457543   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.458390   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0318 20:32:35.458784   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.459218   13597 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-791443"
	I0318 20:32:35.459271   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.459478   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.459497   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.459631   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.459668   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.461374   13597 addons.go:234] Setting addon default-storageclass=true in "addons-791443"
	I0318 20:32:35.461414   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.461489   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.461522   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.461741   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.461768   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.468284   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.468360   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
	I0318 20:32:35.468466   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
	I0318 20:32:35.469825   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.469931   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.470340   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0318 20:32:35.470353   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.470387   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.470636   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.470979   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.470997   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.471158   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.471170   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.471408   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.471829   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.471845   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.471973   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.472489   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.472520   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.473045   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.473605   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.473641   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.474280   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.474310   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.480479   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0318 20:32:35.480593   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0318 20:32:35.481008   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.481259   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.481503   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.481524   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.481929   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.481950   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.481999   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.482276   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.482325   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.482435   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.484920   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.487194   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0318 20:32:35.485536   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.488751   13597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0318 20:32:35.488764   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0318 20:32:35.488783   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.491188   13597 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 20:32:35.491794   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
	I0318 20:32:35.492982   13597 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 20:32:35.492997   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 20:32:35.493014   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.492553   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I0318 20:32:35.492578   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44647
	I0318 20:32:35.493942   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.494026   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.494261   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.494618   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.494744   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.494757   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.494986   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.495002   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.495066   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.495170   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.495485   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.495485   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.495594   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.495610   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.495666   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.495086   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.495806   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.495890   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.495908   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.495920   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.496006   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.496027   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.496553   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.497148   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.497182   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.498537   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.498956   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.498974   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.499195   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.499385   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.499544   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.499686   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.519124   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0318 20:32:35.519157   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33143
	I0318 20:32:35.519285   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0318 20:32:35.519496   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.519622   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.520014   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.520027   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.520073   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.520509   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.520527   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.520584   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.520650   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.520673   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.520981   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.521013   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.521037   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.521184   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.521659   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.521695   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.522923   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0318 20:32:35.524107   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.524214   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.524344   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.527121   13597 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0318 20:32:35.528789   13597 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0318 20:32:35.525223   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.525254   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0318 20:32:35.526865   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0318 20:32:35.527053   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34255
	I0318 20:32:35.529872   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0318 20:32:35.530857   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.530939   13597 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 20:32:35.531553   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.531609   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.531635   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.532851   13597 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0318 20:32:35.532864   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0318 20:32:35.532883   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.533053   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0318 20:32:35.533071   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.534244   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.534265   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.534247   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.534316   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.534340   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.534356   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.534373   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.534625   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.534647   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.534701   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.534977   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0318 20:32:35.535043   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.535183   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.535196   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.535376   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.535410   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.535532   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.535589   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.535921   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.535951   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.536035   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.536040   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0318 20:32:35.536052   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.536500   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.536582   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.536643   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.537189   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.537206   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.537605   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I0318 20:32:35.537717   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.537995   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.538061   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.538588   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.538605   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.538963   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.539024   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:35.539414   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.539443   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.539639   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.539698   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.539716   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.539734   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.539836   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.539855   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.540048   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.540236   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.540381   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.540586   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0318 20:32:35.540867   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38595
	I0318 20:32:35.540958   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.541014   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.541221   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.543693   13597 out.go:177]   - Using image docker.io/registry:2.8.3
	I0318 20:32:35.541667   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.542581   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.542614   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.542969   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.542979   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.543059   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.543673   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.546697   13597 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0318 20:32:35.545362   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.545426   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.545532   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.545558   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.546078   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.547376   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41883
	I0318 20:32:35.548326   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.548417   13597 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0318 20:32:35.549053   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.549771   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0318 20:32:35.549154   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.549718   13597 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0318 20:32:35.549820   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.549911   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0318 20:32:35.550139   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.550170   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.550314   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.551505   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0318 20:32:35.551736   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.552064   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.552087   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.552165   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.553799   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0318 20:32:35.553887   13597 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 20:32:35.554000   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.555336   13597 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0318 20:32:35.556628   13597 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0318 20:32:35.556643   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0318 20:32:35.556659   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.555459   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.555525   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 20:32:35.556739   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:35.557876   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.558170   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0318 20:32:35.558189   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.558311   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.558674   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.559583   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0318 20:32:35.559717   13597 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0318 20:32:35.561212   13597 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 20:32:35.562926   13597 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 20:32:35.562585   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0318 20:32:35.559865   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.564654   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.560103   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.566036   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0318 20:32:35.560314   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.560340   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.560522   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.559843   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:35.563035   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.563414   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0318 20:32:35.563510   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.564078   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.564689   13597 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 20:32:35.564874   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.566228   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.567331   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.567334   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.567351   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.567362   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.569299   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0318 20:32:35.567425   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0318 20:32:35.568120   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.568135   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.568133   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.568250   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.568349   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.568732   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.568953   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.570784   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.570812   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.572553   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0318 20:32:35.570815   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.571136   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.571164   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.571249   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.571262   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.573578   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.574030   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.574040   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.574329   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.574999   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.575008   13597 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0318 20:32:35.575025   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.575167   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.575184   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.576283   13597 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0318 20:32:35.577743   13597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0318 20:32:35.577760   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0318 20:32:35.577777   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.576407   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.579347   13597 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0318 20:32:35.579358   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0318 20:32:35.579369   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.576794   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.576827   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43993
	I0318 20:32:35.577216   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.577313   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.577897   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.579809   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.581290   13597 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0318 20:32:35.583177   13597 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0318 20:32:35.583190   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0318 20:32:35.583199   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.581433   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.580122   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.580483   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.580990   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.583455   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.583474   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.579962   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.581593   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.582424   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.583541   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.583558   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.582992   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.583719   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.583858   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.583869   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.583997   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.584113   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.584284   13597 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 20:32:35.584294   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 20:32:35.584304   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.584348   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.584380   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.584482   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.584580   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.584668   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.585796   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I0318 20:32:35.586203   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.586208   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:35.588221   13597 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0318 20:32:35.586632   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:35.587926   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.589169   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.589686   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.589698   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.589704   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:35.589715   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.589553   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.589728   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.589739   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.589802   13597 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 20:32:35.589809   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0318 20:32:35.589817   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.589908   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.589956   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.590048   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:35.590102   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.590117   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.590199   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:35.590256   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.590263   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.591932   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:35.594894   13597 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0318 20:32:35.593036   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.594923   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.594939   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.593558   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.596559   13597 out.go:177]   - Using image docker.io/busybox:stable
	I0318 20:32:35.595110   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.598096   13597 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 20:32:35.598116   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0318 20:32:35.598136   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:35.598178   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.598337   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:35.601231   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.601598   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:35.601608   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:35.601792   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:35.601936   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:35.602055   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:35.602140   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	W0318 20:32:35.604014   13597 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57386->192.168.39.131:22: read: connection reset by peer
	I0318 20:32:35.604032   13597 retry.go:31] will retry after 207.772892ms: ssh: handshake failed: read tcp 192.168.39.1:57386->192.168.39.131:22: read: connection reset by peer
	I0318 20:32:35.912420   13597 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0318 20:32:35.912448   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0318 20:32:35.936288   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 20:32:35.974318   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 20:32:35.986991   13597 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0318 20:32:35.987008   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0318 20:32:36.023351   13597 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0318 20:32:36.023381   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0318 20:32:36.047079   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 20:32:36.049718   13597 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0318 20:32:36.049733   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0318 20:32:36.081964   13597 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0318 20:32:36.081985   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0318 20:32:36.138162   13597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0318 20:32:36.138185   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0318 20:32:36.153791   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 20:32:36.166090   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 20:32:36.173379   13597 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0318 20:32:36.173403   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0318 20:32:36.180810   13597 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0318 20:32:36.180828   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0318 20:32:36.182539   13597 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 20:32:36.182553   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0318 20:32:36.238569   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0318 20:32:36.242312   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0318 20:32:36.271279   13597 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0318 20:32:36.271296   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0318 20:32:36.289488   13597 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0318 20:32:36.289505   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0318 20:32:36.313958   13597 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:32:36.314157   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 20:32:36.423099   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 20:32:36.437160   13597 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 20:32:36.437181   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0318 20:32:36.468515   13597 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 20:32:36.468535   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 20:32:36.480252   13597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0318 20:32:36.480275   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0318 20:32:36.483008   13597 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0318 20:32:36.483023   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0318 20:32:36.504107   13597 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0318 20:32:36.504131   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0318 20:32:36.615335   13597 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0318 20:32:36.615363   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0318 20:32:36.664718   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 20:32:36.729874   13597 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0318 20:32:36.729897   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0318 20:32:36.756039   13597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0318 20:32:36.756064   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0318 20:32:36.756384   13597 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0318 20:32:36.756407   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0318 20:32:36.981187   13597 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0318 20:32:36.981215   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0318 20:32:36.998948   13597 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 20:32:36.998974   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 20:32:37.114092   13597 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0318 20:32:37.114120   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0318 20:32:37.118914   13597 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0318 20:32:37.118930   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0318 20:32:37.136384   13597 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0318 20:32:37.136395   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0318 20:32:37.346673   13597 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0318 20:32:37.346691   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0318 20:32:37.387659   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 20:32:37.482936   13597 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 20:32:37.482955   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0318 20:32:37.489717   13597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0318 20:32:37.489738   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0318 20:32:37.573283   13597 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 20:32:37.573304   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0318 20:32:37.745835   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0318 20:32:37.871714   13597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0318 20:32:37.871741   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0318 20:32:37.932389   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 20:32:38.154997   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 20:32:38.341403   13597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0318 20:32:38.341435   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0318 20:32:38.683151   13597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0318 20:32:38.683180   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0318 20:32:39.019171   13597 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 20:32:39.019193   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0318 20:32:39.570174   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 20:32:42.164292   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.227967421s)
	I0318 20:32:42.164329   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.189981539s)
	I0318 20:32:42.164349   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.117245325s)
	I0318 20:32:42.164362   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.164370   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.164376   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.164381   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.164397   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.010584234s)
	I0318 20:32:42.164411   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.164424   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.164370   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.164483   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.164623   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.164646   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.164647   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.164655   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.164667   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.164674   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.164681   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.164683   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.164697   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.164708   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.164720   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.164727   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.164734   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.164740   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.164772   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.164778   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.164788   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.164796   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.164798   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.164805   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.164986   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.165022   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.165040   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.165045   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.165048   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.165055   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.166350   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.166378   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.166385   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.166563   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.166607   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.166620   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.282290   13597 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0318 20:32:42.282322   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:42.285287   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:42.285687   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:42.285715   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:42.285910   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:42.286084   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:42.286234   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:42.286440   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:42.436028   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:42.436058   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:42.436416   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:42.436418   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:42.436448   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:42.504063   13597 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0318 20:32:42.569912   13597 addons.go:234] Setting addon gcp-auth=true in "addons-791443"
	I0318 20:32:42.569966   13597 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:32:42.570324   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:42.570352   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:42.584762   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I0318 20:32:42.585173   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:42.585612   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:42.585635   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:42.585971   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:42.586559   13597 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:32:42.586591   13597 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:32:42.600548   13597 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40725
	I0318 20:32:42.601006   13597 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:32:42.601482   13597 main.go:141] libmachine: Using API Version  1
	I0318 20:32:42.601507   13597 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:32:42.601827   13597 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:32:42.601993   13597 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:32:42.603511   13597 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:32:42.603713   13597 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0318 20:32:42.603734   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:32:42.606288   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:42.606700   13597 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:32:42.606723   13597 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:32:42.606889   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:32:42.607058   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:32:42.607212   13597 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:32:42.607333   13597 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:32:45.889422   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.650821909s)
	I0318 20:32:45.889541   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.889555   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.889558   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.647208754s)
	I0318 20:32:45.889567   13597 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.575587478s)
	I0318 20:32:45.889614   13597 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.575433351s)
	I0318 20:32:45.889643   13597 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 20:32:45.889665   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.466538746s)
	I0318 20:32:45.889683   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.889695   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.889762   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.225012873s)
	I0318 20:32:45.889798   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.889811   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.889927   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.502240695s)
	I0318 20:32:45.889949   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.889959   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.890027   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.144166309s)
	I0318 20:32:45.890054   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.890072   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.889600   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.890124   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.890260   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.735226544s)
	I0318 20:32:45.890283   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.890293   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.890404   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.890435   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.890444   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.890455   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.890463   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.890491   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.957764811s)
	I0318 20:32:45.890509   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	W0318 20:32:45.890523   13597 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 20:32:45.890544   13597 retry.go:31] will retry after 283.236278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 20:32:45.890570   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.890548   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.890580   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.890589   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.890591   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.890595   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.890602   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.890606   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.890607   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.890610   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.890614   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.890622   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.890723   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.890753   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.890761   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.890769   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.890777   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.890526   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.890925   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.890954   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.890962   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.890558   13597 node_ready.go:35] waiting up to 6m0s for node "addons-791443" to be "Ready" ...
	I0318 20:32:45.891117   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.891137   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.891178   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.891188   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.891201   13597 addons.go:470] Verifying addon registry=true in "addons-791443"
	I0318 20:32:45.893835   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.893839   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.893848   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.893909   13597 out.go:177] * Verifying registry addon...
	I0318 20:32:45.891856   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.893993   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.891877   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.725764672s)
	I0318 20:32:45.894084   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.894094   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.891884   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.891901   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.894163   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.894203   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.894228   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.891926   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.891963   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.894290   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.894300   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.894309   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.892434   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.894348   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.892453   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.894382   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.894420   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.894439   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.895923   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.894448   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.895936   13597 addons.go:470] Verifying addon metrics-server=true in "addons-791443"
	I0318 20:32:45.895991   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.896010   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.896018   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.894519   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.894503   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.896068   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.897373   13597 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-791443 service yakd-dashboard -n yakd-dashboard
	
	I0318 20:32:45.896213   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.896233   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.896684   13597 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0318 20:32:45.898555   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.898575   13597 addons.go:470] Verifying addon ingress=true in "addons-791443"
	I0318 20:32:45.899874   13597 out.go:177] * Verifying ingress addon...
	I0318 20:32:45.901577   13597 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0318 20:32:45.921460   13597 node_ready.go:49] node "addons-791443" has status "Ready":"True"
	I0318 20:32:45.921493   13597 node_ready.go:38] duration metric: took 30.502331ms for node "addons-791443" to be "Ready" ...
	I0318 20:32:45.921505   13597 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 20:32:45.928421   13597 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0318 20:32:45.928445   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:45.937377   13597 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0318 20:32:45.937397   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:45.942382   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:45.942402   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:45.942660   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:45.942679   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:45.942680   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:45.947287   13597 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8jcwf" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:45.997561   13597 pod_ready.go:92] pod "coredns-5dd5756b68-8jcwf" in "kube-system" namespace has status "Ready":"True"
	I0318 20:32:45.997581   13597 pod_ready.go:81] duration metric: took 50.272866ms for pod "coredns-5dd5756b68-8jcwf" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:45.997660   13597 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gvtmf" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.031651   13597 pod_ready.go:92] pod "coredns-5dd5756b68-gvtmf" in "kube-system" namespace has status "Ready":"True"
	I0318 20:32:46.031673   13597 pod_ready.go:81] duration metric: took 34.005447ms for pod "coredns-5dd5756b68-gvtmf" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.031682   13597 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-791443" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.062253   13597 pod_ready.go:92] pod "etcd-addons-791443" in "kube-system" namespace has status "Ready":"True"
	I0318 20:32:46.062274   13597 pod_ready.go:81] duration metric: took 30.585615ms for pod "etcd-addons-791443" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.062283   13597 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-791443" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.109328   13597 pod_ready.go:92] pod "kube-apiserver-addons-791443" in "kube-system" namespace has status "Ready":"True"
	I0318 20:32:46.109349   13597 pod_ready.go:81] duration metric: took 47.05945ms for pod "kube-apiserver-addons-791443" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.109360   13597 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-791443" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.174466   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 20:32:46.302285   13597 pod_ready.go:92] pod "kube-controller-manager-addons-791443" in "kube-system" namespace has status "Ready":"True"
	I0318 20:32:46.302306   13597 pod_ready.go:81] duration metric: took 192.940489ms for pod "kube-controller-manager-addons-791443" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.302319   13597 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4wrfg" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.397531   13597 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-791443" context rescaled to 1 replicas
	I0318 20:32:46.414394   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:46.418849   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:46.699396   13597 pod_ready.go:92] pod "kube-proxy-4wrfg" in "kube-system" namespace has status "Ready":"True"
	I0318 20:32:46.699419   13597 pod_ready.go:81] duration metric: took 397.094008ms for pod "kube-proxy-4wrfg" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.699431   13597 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-791443" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:46.951577   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:46.958478   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:47.160597   13597 pod_ready.go:92] pod "kube-scheduler-addons-791443" in "kube-system" namespace has status "Ready":"True"
	I0318 20:32:47.160635   13597 pod_ready.go:81] duration metric: took 461.191951ms for pod "kube-scheduler-addons-791443" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:47.160650   13597 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace to be "Ready" ...
	I0318 20:32:47.422222   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:47.436005   13597 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.832270816s)
	I0318 20:32:47.437638   13597 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 20:32:47.436236   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.866013177s)
	I0318 20:32:47.437682   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:47.437695   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:47.438004   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:47.438043   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:47.439309   13597 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0318 20:32:47.440721   13597 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0318 20:32:47.440739   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0318 20:32:47.439322   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:47.440783   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:47.440812   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:47.441142   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:47.441158   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:47.441170   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:47.441181   13597 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-791443"
	I0318 20:32:47.443389   13597 out.go:177] * Verifying csi-hostpath-driver addon...
	I0318 20:32:47.441469   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:47.445061   13597 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0318 20:32:47.478813   13597 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0318 20:32:47.478834   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:47.502616   13597 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0318 20:32:47.502640   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0318 20:32:47.580252   13597 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 20:32:47.580270   13597 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0318 20:32:47.627635   13597 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 20:32:47.906198   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:47.909983   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:47.950715   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:48.407913   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:48.408060   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:48.450819   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:48.905945   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:48.920614   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:48.954349   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:49.139066   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.964547333s)
	I0318 20:32:49.139128   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:49.139142   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:49.139433   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:49.139453   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:49.139463   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:49.139471   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:49.139437   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:49.139784   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:49.139795   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:49.139799   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:49.167937   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:32:49.436421   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:49.453717   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:49.491139   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:49.565235   13597 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.937562627s)
	I0318 20:32:49.565288   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:49.565308   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:49.565570   13597 main.go:141] libmachine: (addons-791443) DBG | Closing plugin on server side
	I0318 20:32:49.565637   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:49.565652   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:49.565665   13597 main.go:141] libmachine: Making call to close driver server
	I0318 20:32:49.565674   13597 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:32:49.565988   13597 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:32:49.566006   13597 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:32:49.568091   13597 addons.go:470] Verifying addon gcp-auth=true in "addons-791443"
	I0318 20:32:49.569711   13597 out.go:177] * Verifying gcp-auth addon...
	I0318 20:32:49.571828   13597 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0318 20:32:49.586290   13597 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0318 20:32:49.586309   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:49.923360   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:49.927388   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:49.969768   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:50.080461   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:50.404315   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:50.412633   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:50.450820   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:50.576221   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:50.903754   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:50.908325   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:50.951590   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:51.077582   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:51.403690   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:51.406756   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:51.450864   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:51.575933   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:51.666275   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:32:51.905408   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:51.907267   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:51.958508   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:52.076849   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:52.404678   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:52.407492   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:52.451032   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:52.576165   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:52.907900   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:52.909324   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:52.951359   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:53.076396   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:53.571683   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:53.575266   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:53.575793   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:53.583025   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:53.667032   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:32:53.906500   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:53.909085   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:53.951541   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:54.075605   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:54.406521   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:54.415465   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:54.453243   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:54.577025   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:54.904000   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:54.909640   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:54.951970   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:55.076461   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:55.404917   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:55.406506   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:55.451373   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:55.575888   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:55.671867   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:32:55.906522   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:55.906811   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:55.950765   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:56.075900   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:56.406139   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:56.407588   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:56.450520   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:56.576433   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:56.910541   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:56.911721   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:56.951310   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:57.078264   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:57.405127   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:57.407280   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:57.451158   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:57.578422   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:57.905984   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:57.909320   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:57.952048   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:58.075975   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:58.167231   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:32:58.407765   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:58.410219   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:58.452282   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:58.576515   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:58.903943   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:58.906625   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:58.951659   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:59.075652   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:59.403523   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:59.406897   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:59.450458   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:32:59.575749   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:32:59.904276   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:32:59.906566   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:32:59.950992   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:00.075763   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:00.404783   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:00.415287   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:00.457711   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:00.576312   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:00.667358   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:00.904195   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:00.916324   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:00.954252   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:01.076126   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:01.403689   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:01.405802   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:01.520213   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:01.575925   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:01.912371   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:01.915262   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:01.952054   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:02.075941   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:02.405057   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:02.407328   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:02.451546   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:02.577236   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:02.671893   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:02.909385   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:02.914502   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:02.950658   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:03.077981   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:03.407263   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:03.411728   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:03.452483   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:03.576587   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:03.902602   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:03.905818   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:03.949985   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:04.076013   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:04.403890   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:04.406347   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:04.450782   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:04.576117   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:04.914002   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:04.915957   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:04.950486   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:05.075157   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:05.166414   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:05.404652   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:05.407270   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:05.451632   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:05.576695   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:05.903661   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:05.907140   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:05.950796   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:06.076277   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:06.405156   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:06.406788   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:06.451123   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:06.577259   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:06.907587   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:06.911673   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:06.950854   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:07.078570   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:07.167851   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:07.405375   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:07.410062   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:07.450631   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:07.576216   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:07.987898   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:07.989232   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:07.998122   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:08.088664   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:08.405898   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:08.407284   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:08.450335   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:08.576981   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:08.903081   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:08.906061   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:08.951505   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:09.076028   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:09.404837   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:09.406974   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:09.454500   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:09.575483   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:09.667716   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:09.909941   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:09.915011   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:09.950942   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:10.077097   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:10.404745   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:10.405787   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:10.452013   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:10.576755   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:10.904580   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:10.907656   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:10.950608   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:11.094526   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:11.403286   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:11.405958   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:11.450956   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:11.575366   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:11.904543   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:11.907110   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:11.950384   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:12.171436   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:12.178468   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:12.403640   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:12.406691   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:12.451937   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:12.576178   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:12.904877   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:12.906384   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:12.951001   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:13.075663   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:13.404396   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:13.405914   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:13.450711   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:13.576770   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:13.905485   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:13.907852   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:13.950795   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:14.076267   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:14.406939   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:14.407488   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:14.450079   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:14.576543   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:14.668375   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:14.906260   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:14.907604   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:14.951818   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:15.076294   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:15.403656   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:15.407345   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:15.451117   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:15.576370   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:15.906028   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:15.907826   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:15.950768   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:16.076877   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:16.404231   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:16.407846   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:16.451943   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:16.576042   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:16.903978   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:16.907370   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:16.953131   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:17.075988   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:17.166819   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:17.403580   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:17.407190   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:17.450655   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:17.868383   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:17.905734   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:17.906494   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:17.951600   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:18.076440   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:18.403603   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:18.407375   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:18.451426   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:18.576154   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:18.904134   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:18.905767   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:18.952826   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:19.076521   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:19.167612   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:19.403552   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:19.405945   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:19.450620   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:19.575702   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:19.903545   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:19.906516   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:19.951378   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:20.076345   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:20.404703   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:20.407938   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:20.450377   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:20.576347   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:20.903756   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:20.906097   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:20.951010   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:21.083538   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:21.168030   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:21.404129   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:21.408405   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:21.454987   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:21.576387   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:21.911969   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:21.912305   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:21.951569   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:22.077019   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:22.406950   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:22.410336   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:22.450311   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:22.577481   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:22.903956   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 20:33:22.905968   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:22.950534   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:23.074953   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:23.404341   13597 kapi.go:107] duration metric: took 37.507660693s to wait for kubernetes.io/minikube-addons=registry ...
	I0318 20:33:23.406090   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:23.451043   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:23.576006   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:23.668440   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:23.906953   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:23.950668   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:24.077855   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:24.406673   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:24.450671   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:24.576299   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:24.907685   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:24.952476   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:25.077601   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:25.406243   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:25.453730   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:25.576232   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:25.668944   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:25.906637   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:25.950930   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:26.075725   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:26.566822   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:26.568829   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:26.577049   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:26.906855   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:26.950771   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:27.076127   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:27.407262   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:27.450991   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:27.576270   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:27.674925   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:27.906437   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:27.959543   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:28.076336   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:28.408240   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:28.450947   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:28.577121   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:28.913896   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:28.950743   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:29.075794   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:29.406143   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:29.451285   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:29.575529   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:29.907359   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:29.957460   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:30.076035   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:30.168660   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:30.406406   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:30.452463   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:30.575426   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:30.906977   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:30.950907   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:31.077078   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:31.406828   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:31.454123   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:31.578153   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:31.906474   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:31.952754   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:32.076670   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:32.407091   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:32.453414   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:32.575549   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:32.668316   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:32.908018   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:32.952661   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:33.076481   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:33.410706   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:33.451277   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:33.576622   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:33.907401   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:33.951514   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:34.075282   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:34.407256   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:34.451108   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:34.579342   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:34.676640   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:34.906141   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:34.956726   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:35.075686   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:35.406820   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:35.453820   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:35.576250   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:35.906663   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:35.951044   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:36.076560   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:36.407084   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:36.451956   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:36.575949   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:36.906810   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:36.950978   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:37.075890   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:37.166728   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:37.406981   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:37.451292   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:37.575741   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:37.907555   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:37.959324   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:38.076300   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:38.406819   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:38.452106   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:38.575802   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:38.906940   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:38.950461   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:39.077220   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:39.169768   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:39.407179   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:39.451078   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:39.575479   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:39.906333   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:39.951403   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:40.077203   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:40.406113   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:40.451688   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:40.576302   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:40.906270   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:40.950983   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:41.075790   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:41.173070   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:41.406507   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:41.451648   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:41.575746   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:41.908541   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:41.958094   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:42.075772   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:42.408402   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:42.451554   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:42.575941   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:42.907111   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:42.950541   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:43.075230   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:43.406984   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:43.451453   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:43.576062   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:43.669659   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:43.908444   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:43.952822   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:44.076217   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:44.406845   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:44.450524   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:44.575320   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:44.906560   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:44.953342   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:45.076855   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:45.407627   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:45.461022   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:45.575501   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:45.906776   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:45.956492   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:46.075965   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:46.167194   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:46.407321   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:46.451081   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:46.575588   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:46.907689   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:46.954517   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:47.076402   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:47.428695   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:47.457395   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:47.575206   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:47.907722   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:47.950054   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:48.078007   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:48.170614   13597 pod_ready.go:102] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"False"
	I0318 20:33:48.412662   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:48.460710   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:48.576854   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:48.907241   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:48.950892   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:49.075777   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:49.406868   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:49.461279   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:49.578253   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:49.667170   13597 pod_ready.go:92] pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace has status "Ready":"True"
	I0318 20:33:49.667194   13597 pod_ready.go:81] duration metric: took 1m2.506536742s for pod "metrics-server-69cf46c98-4nzjv" in "kube-system" namespace to be "Ready" ...
	I0318 20:33:49.667205   13597 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n5nbn" in "kube-system" namespace to be "Ready" ...
	I0318 20:33:49.672240   13597 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-n5nbn" in "kube-system" namespace has status "Ready":"True"
	I0318 20:33:49.672258   13597 pod_ready.go:81] duration metric: took 5.045992ms for pod "nvidia-device-plugin-daemonset-n5nbn" in "kube-system" namespace to be "Ready" ...
	I0318 20:33:49.672282   13597 pod_ready.go:38] duration metric: took 1m3.75076494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
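	(Editor's note: the `pod_ready.go` lines above poll each labelled pod until its Ready condition flips to True. The snippet below is a minimal client-go sketch of that pattern, not minikube's actual implementation; the kubeconfig path, poll interval, and timeout are illustrative assumptions, and the pod name is taken from the log.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its Ready condition is True or ctx expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %s/%s: %w", ns, name, ctx.Err())
			case <-time.After(2 * time.Second): // poll interval roughly matching the log cadence
			}
		}
	}

	func main() {
		// "/path/to/kubeconfig" is a placeholder; the test harness builds its own client config.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-69cf46c98-4nzjv"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}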
	I0318 20:33:49.672301   13597 api_server.go:52] waiting for apiserver process to appear ...
	I0318 20:33:49.672334   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 20:33:49.672395   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 20:33:49.839174   13597 cri.go:89] found id: "c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c"
	I0318 20:33:49.839201   13597 cri.go:89] found id: ""
	I0318 20:33:49.839209   13597 logs.go:276] 1 containers: [c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c]
	I0318 20:33:49.839267   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:49.846202   13597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 20:33:49.846249   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 20:33:49.896754   13597 cri.go:89] found id: "d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738"
	I0318 20:33:49.896789   13597 cri.go:89] found id: ""
	I0318 20:33:49.896799   13597 logs.go:276] 1 containers: [d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738]
	I0318 20:33:49.896849   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:49.908106   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:49.908575   13597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 20:33:49.908630   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 20:33:49.951027   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:49.987226   13597 cri.go:89] found id: "8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14"
	I0318 20:33:49.987248   13597 cri.go:89] found id: ""
	I0318 20:33:49.987255   13597 logs.go:276] 1 containers: [8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14]
	I0318 20:33:49.987304   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:49.995553   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 20:33:49.995620   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 20:33:50.055718   13597 cri.go:89] found id: "d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e"
	I0318 20:33:50.055746   13597 cri.go:89] found id: ""
	I0318 20:33:50.055755   13597 logs.go:276] 1 containers: [d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e]
	I0318 20:33:50.055807   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:50.060358   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 20:33:50.060425   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 20:33:50.076384   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:50.130066   13597 cri.go:89] found id: "142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746"
	I0318 20:33:50.130085   13597 cri.go:89] found id: ""
	I0318 20:33:50.130094   13597 logs.go:276] 1 containers: [142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746]
	I0318 20:33:50.130146   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:50.136295   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 20:33:50.136353   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 20:33:50.208725   13597 cri.go:89] found id: "cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d"
	I0318 20:33:50.208746   13597 cri.go:89] found id: ""
	I0318 20:33:50.208753   13597 logs.go:276] 1 containers: [cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d]
	I0318 20:33:50.208797   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:50.219772   13597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 20:33:50.219838   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 20:33:50.307525   13597 cri.go:89] found id: ""
	I0318 20:33:50.307552   13597 logs.go:276] 0 containers: []
	W0318 20:33:50.307562   13597 logs.go:278] No container was found matching "kindnet"
	I0318 20:33:50.307578   13597 logs.go:123] Gathering logs for container status ...
	I0318 20:33:50.307593   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 20:33:50.409553   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:50.449496   13597 logs.go:123] Gathering logs for kube-apiserver [c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c] ...
	I0318 20:33:50.449525   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c"
	I0318 20:33:50.462727   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:50.576183   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:50.599831   13597 logs.go:123] Gathering logs for dmesg ...
	I0318 20:33:50.599868   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 20:33:50.639900   13597 logs.go:123] Gathering logs for describe nodes ...
	I0318 20:33:50.639929   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 20:33:50.807308   13597 logs.go:123] Gathering logs for etcd [d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738] ...
	I0318 20:33:50.807334   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738"
	I0318 20:33:50.869166   13597 logs.go:123] Gathering logs for coredns [8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14] ...
	I0318 20:33:50.869208   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14"
	I0318 20:33:50.907413   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:50.946134   13597 logs.go:123] Gathering logs for kube-scheduler [d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e] ...
	I0318 20:33:50.946169   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e"
	I0318 20:33:50.957360   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:51.012039   13597 logs.go:123] Gathering logs for kube-proxy [142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746] ...
	I0318 20:33:51.012071   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746"
	I0318 20:33:51.076148   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:51.114960   13597 logs.go:123] Gathering logs for kube-controller-manager [cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d] ...
	I0318 20:33:51.115001   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d"
	I0318 20:33:51.218773   13597 logs.go:123] Gathering logs for kubelet ...
	I0318 20:33:51.218799   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 20:33:51.310067   13597 logs.go:123] Gathering logs for CRI-O ...
	I0318 20:33:51.310098   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 20:33:51.406623   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:51.455346   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:51.576340   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:51.907705   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:51.951192   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:52.075913   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:52.407164   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:52.454334   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:52.576941   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:52.907000   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:52.950378   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:53.076628   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:53.406855   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:53.451176   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:53.575700   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:54.109117   13597 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:33:54.134971   13597 api_server.go:72] duration metric: took 1m18.712804653s to wait for apiserver process to appear ...
	I0318 20:33:54.134997   13597 api_server.go:88] waiting for apiserver healthz status ...
	I0318 20:33:54.135032   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 20:33:54.135082   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 20:33:54.185732   13597 cri.go:89] found id: "c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c"
	I0318 20:33:54.185756   13597 cri.go:89] found id: ""
	I0318 20:33:54.185763   13597 logs.go:276] 1 containers: [c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c]
	I0318 20:33:54.185807   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:54.198118   13597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 20:33:54.198171   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 20:33:54.244972   13597 cri.go:89] found id: "d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738"
	I0318 20:33:54.244992   13597 cri.go:89] found id: ""
	I0318 20:33:54.245004   13597 logs.go:276] 1 containers: [d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738]
	I0318 20:33:54.245057   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:54.250057   13597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 20:33:54.250105   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 20:33:54.275617   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:54.278272   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:54.279334   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:54.347436   13597 cri.go:89] found id: "8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14"
	I0318 20:33:54.347458   13597 cri.go:89] found id: ""
	I0318 20:33:54.347466   13597 logs.go:276] 1 containers: [8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14]
	I0318 20:33:54.347511   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:54.366003   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 20:33:54.366074   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 20:33:54.406132   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:54.451290   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:54.473321   13597 cri.go:89] found id: "d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e"
	I0318 20:33:54.473342   13597 cri.go:89] found id: ""
	I0318 20:33:54.473351   13597 logs.go:276] 1 containers: [d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e]
	I0318 20:33:54.473416   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:54.489472   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 20:33:54.489533   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 20:33:54.578633   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:54.590923   13597 cri.go:89] found id: "142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746"
	I0318 20:33:54.590946   13597 cri.go:89] found id: ""
	I0318 20:33:54.590956   13597 logs.go:276] 1 containers: [142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746]
	I0318 20:33:54.591016   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:54.596843   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 20:33:54.596908   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 20:33:54.645831   13597 cri.go:89] found id: "cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d"
	I0318 20:33:54.645854   13597 cri.go:89] found id: ""
	I0318 20:33:54.645863   13597 logs.go:276] 1 containers: [cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d]
	I0318 20:33:54.645911   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:54.650783   13597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 20:33:54.650842   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 20:33:54.697899   13597 cri.go:89] found id: ""
	I0318 20:33:54.697922   13597 logs.go:276] 0 containers: []
	W0318 20:33:54.697930   13597 logs.go:278] No container was found matching "kindnet"
	I0318 20:33:54.697938   13597 logs.go:123] Gathering logs for kubelet ...
	I0318 20:33:54.697949   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 20:33:54.774131   13597 logs.go:123] Gathering logs for dmesg ...
	I0318 20:33:54.774163   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 20:33:54.794242   13597 logs.go:123] Gathering logs for kube-controller-manager [cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d] ...
	I0318 20:33:54.794266   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d"
	I0318 20:33:54.899232   13597 logs.go:123] Gathering logs for container status ...
	I0318 20:33:54.899261   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 20:33:54.907025   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:54.953835   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:54.998076   13597 logs.go:123] Gathering logs for describe nodes ...
	I0318 20:33:54.998108   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 20:33:55.075691   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:55.219316   13597 logs.go:123] Gathering logs for kube-apiserver [c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c] ...
	I0318 20:33:55.219353   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c"
	I0318 20:33:55.331728   13597 logs.go:123] Gathering logs for etcd [d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738] ...
	I0318 20:33:55.331758   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738"
	I0318 20:33:55.395611   13597 logs.go:123] Gathering logs for coredns [8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14] ...
	I0318 20:33:55.395640   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14"
	I0318 20:33:55.406942   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:55.451198   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:55.493357   13597 logs.go:123] Gathering logs for kube-scheduler [d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e] ...
	I0318 20:33:55.493392   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e"
	I0318 20:33:55.576137   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:55.583503   13597 logs.go:123] Gathering logs for kube-proxy [142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746] ...
	I0318 20:33:55.583527   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746"
	I0318 20:33:55.641057   13597 logs.go:123] Gathering logs for CRI-O ...
	I0318 20:33:55.641083   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 20:33:55.906848   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:55.951275   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:56.075632   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:56.411723   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:56.458026   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:56.580655   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:56.906707   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:56.950221   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:57.076211   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:57.406756   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:57.489724   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:57.577176   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:57.908386   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:57.957247   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:58.075801   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:58.409435   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:58.452399   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:58.535827   13597 api_server.go:253] Checking apiserver healthz at https://192.168.39.131:8443/healthz ...
	I0318 20:33:58.541819   13597 api_server.go:279] https://192.168.39.131:8443/healthz returned 200:
	ok
	I0318 20:33:58.543001   13597 api_server.go:141] control plane version: v1.28.4
	I0318 20:33:58.543020   13597 api_server.go:131] duration metric: took 4.408016812s to wait for apiserver health ...
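	(Editor's note: `api_server.go:253` above probes the apiserver's /healthz endpoint and expects a 200 "ok". The sketch below shows that probe in isolation; the address is copied from the log, while the timeout and the decision to skip TLS verification are assumptions made to keep the example self-contained — the real client presents the cluster CA instead.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch only: skip certificate verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.131:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
	}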
	I0318 20:33:58.543031   13597 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 20:33:58.543060   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 20:33:58.543104   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 20:33:58.575985   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:58.707000   13597 cri.go:89] found id: "c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c"
	I0318 20:33:58.707024   13597 cri.go:89] found id: ""
	I0318 20:33:58.707031   13597 logs.go:276] 1 containers: [c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c]
	I0318 20:33:58.707075   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:58.731748   13597 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 20:33:58.731805   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 20:33:58.834735   13597 cri.go:89] found id: "d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738"
	I0318 20:33:58.834756   13597 cri.go:89] found id: ""
	I0318 20:33:58.834763   13597 logs.go:276] 1 containers: [d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738]
	I0318 20:33:58.834806   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:58.853661   13597 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 20:33:58.853734   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 20:33:58.906706   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:58.936975   13597 cri.go:89] found id: "8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14"
	I0318 20:33:58.936999   13597 cri.go:89] found id: ""
	I0318 20:33:58.937009   13597 logs.go:276] 1 containers: [8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14]
	I0318 20:33:58.937062   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:58.955489   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:58.968575   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 20:33:58.968648   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 20:33:59.040246   13597 cri.go:89] found id: "d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e"
	I0318 20:33:59.040272   13597 cri.go:89] found id: ""
	I0318 20:33:59.040281   13597 logs.go:276] 1 containers: [d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e]
	I0318 20:33:59.040333   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:59.050554   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 20:33:59.050617   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 20:33:59.076089   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:59.122011   13597 cri.go:89] found id: "142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746"
	I0318 20:33:59.122037   13597 cri.go:89] found id: ""
	I0318 20:33:59.122047   13597 logs.go:276] 1 containers: [142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746]
	I0318 20:33:59.122103   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:59.135113   13597 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 20:33:59.135173   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 20:33:59.184804   13597 cri.go:89] found id: "cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d"
	I0318 20:33:59.184830   13597 cri.go:89] found id: ""
	I0318 20:33:59.184839   13597 logs.go:276] 1 containers: [cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d]
	I0318 20:33:59.184892   13597 ssh_runner.go:195] Run: which crictl
	I0318 20:33:59.190226   13597 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 20:33:59.190272   13597 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 20:33:59.234373   13597 cri.go:89] found id: ""
	I0318 20:33:59.234397   13597 logs.go:276] 0 containers: []
	W0318 20:33:59.234408   13597 logs.go:278] No container was found matching "kindnet"
	I0318 20:33:59.234418   13597 logs.go:123] Gathering logs for describe nodes ...
	I0318 20:33:59.234432   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 20:33:59.407191   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:59.442431   13597 logs.go:123] Gathering logs for etcd [d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738] ...
	I0318 20:33:59.442459   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738"
	I0318 20:33:59.451429   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:33:59.576892   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:33:59.615787   13597 logs.go:123] Gathering logs for coredns [8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14] ...
	I0318 20:33:59.615836   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14"
	I0318 20:33:59.834693   13597 logs.go:123] Gathering logs for kube-proxy [142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746] ...
	I0318 20:33:59.834739   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746"
	I0318 20:33:59.906277   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:33:59.956702   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:00.040559   13597 logs.go:123] Gathering logs for kube-controller-manager [cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d] ...
	I0318 20:34:00.040591   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d"
	I0318 20:34:00.084851   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:00.208077   13597 logs.go:123] Gathering logs for container status ...
	I0318 20:34:00.208106   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 20:34:00.318913   13597 logs.go:123] Gathering logs for dmesg ...
	I0318 20:34:00.318945   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 20:34:00.371589   13597 logs.go:123] Gathering logs for kube-apiserver [c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c] ...
	I0318 20:34:00.371615   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c"
	I0318 20:34:00.406386   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:00.441261   13597 logs.go:123] Gathering logs for kube-scheduler [d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e] ...
	I0318 20:34:00.441287   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e"
	I0318 20:34:00.450430   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:00.554624   13597 logs.go:123] Gathering logs for CRI-O ...
	I0318 20:34:00.554648   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 20:34:00.576314   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:00.907980   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:00.943600   13597 logs.go:123] Gathering logs for kubelet ...
	I0318 20:34:00.943633   13597 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 20:34:00.958158   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:01.075970   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:01.407230   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:01.452469   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:01.577996   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:01.906570   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:01.951443   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:02.076113   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:02.407512   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:02.454468   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:02.575567   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:02.906631   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:02.952054   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:03.075804   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:03.406464   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:03.455034   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:03.547455   13597 system_pods.go:59] 18 kube-system pods found
	I0318 20:34:03.547485   13597 system_pods.go:61] "coredns-5dd5756b68-8jcwf" [10c7519d-f00c-4a39-bbc4-fd41f886d578] Running
	I0318 20:34:03.547489   13597 system_pods.go:61] "csi-hostpath-attacher-0" [7f7ed96b-6f36-4a04-93fa-ad458f578149] Running
	I0318 20:34:03.547493   13597 system_pods.go:61] "csi-hostpath-resizer-0" [68b27cdd-0cb7-4eae-9b63-0976e103bd21] Running
	I0318 20:34:03.547499   13597 system_pods.go:61] "csi-hostpathplugin-drcv5" [dc2e515c-a16d-4e0a-9d92-4edddbba263e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 20:34:03.547504   13597 system_pods.go:61] "etcd-addons-791443" [98d495c5-b57e-4c16-84fd-8c53de3d2f6f] Running
	I0318 20:34:03.547509   13597 system_pods.go:61] "kube-apiserver-addons-791443" [4a010a2a-6a8b-4d3b-acd7-c4b7dd286e57] Running
	I0318 20:34:03.547512   13597 system_pods.go:61] "kube-controller-manager-addons-791443" [96ba543e-5e42-4cff-9e47-2e7dfa30583e] Running
	I0318 20:34:03.547516   13597 system_pods.go:61] "kube-ingress-dns-minikube" [89b0c3fb-9b61-4bc6-a3a0-01591b00214c] Running
	I0318 20:34:03.547520   13597 system_pods.go:61] "kube-proxy-4wrfg" [0f3b6822-070e-4e10-9964-72c49a522b3a] Running
	I0318 20:34:03.547525   13597 system_pods.go:61] "kube-scheduler-addons-791443" [3a876e0d-370f-4214-8c8b-c202c3eec9bd] Running
	I0318 20:34:03.547529   13597 system_pods.go:61] "metrics-server-69cf46c98-4nzjv" [b7ecdb56-4ae5-4112-9a8b-40564207c8ff] Running
	I0318 20:34:03.547534   13597 system_pods.go:61] "nvidia-device-plugin-daemonset-n5nbn" [45861794-8d7c-49b5-8748-20d3e179d433] Running
	I0318 20:34:03.547544   13597 system_pods.go:61] "registry-m9jd7" [b402e103-9225-45b0-811b-bc35d410e2a6] Running
	I0318 20:34:03.547549   13597 system_pods.go:61] "registry-proxy-br298" [17ae91fe-5e22-4c2b-8b5a-9bfb300a1126] Running
	I0318 20:34:03.547553   13597 system_pods.go:61] "snapshot-controller-58dbcc7b99-8tflr" [9c96fe29-b487-4cc3-a358-b8e659d3c316] Running
	I0318 20:34:03.547559   13597 system_pods.go:61] "snapshot-controller-58dbcc7b99-b54sp" [d2e9b9f2-196c-4001-90d2-54d70f2ca4bd] Running
	I0318 20:34:03.547564   13597 system_pods.go:61] "storage-provisioner" [a6846633-0977-43e0-bfaf-958353d2befc] Running
	I0318 20:34:03.547569   13597 system_pods.go:61] "tiller-deploy-7b677967b9-cksth" [5a800c36-110f-45ae-aabb-2a2089254b00] Running
	I0318 20:34:03.547577   13597 system_pods.go:74] duration metric: took 5.004535213s to wait for pod list to return data ...
	I0318 20:34:03.547593   13597 default_sa.go:34] waiting for default service account to be created ...
	I0318 20:34:03.550173   13597 default_sa.go:45] found service account: "default"
	I0318 20:34:03.550192   13597 default_sa.go:55] duration metric: took 2.593196ms for default service account to be created ...
	I0318 20:34:03.550200   13597 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 20:34:03.559567   13597 system_pods.go:86] 18 kube-system pods found
	I0318 20:34:03.559586   13597 system_pods.go:89] "coredns-5dd5756b68-8jcwf" [10c7519d-f00c-4a39-bbc4-fd41f886d578] Running
	I0318 20:34:03.559591   13597 system_pods.go:89] "csi-hostpath-attacher-0" [7f7ed96b-6f36-4a04-93fa-ad458f578149] Running
	I0318 20:34:03.559596   13597 system_pods.go:89] "csi-hostpath-resizer-0" [68b27cdd-0cb7-4eae-9b63-0976e103bd21] Running
	I0318 20:34:03.559602   13597 system_pods.go:89] "csi-hostpathplugin-drcv5" [dc2e515c-a16d-4e0a-9d92-4edddbba263e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 20:34:03.559607   13597 system_pods.go:89] "etcd-addons-791443" [98d495c5-b57e-4c16-84fd-8c53de3d2f6f] Running
	I0318 20:34:03.559613   13597 system_pods.go:89] "kube-apiserver-addons-791443" [4a010a2a-6a8b-4d3b-acd7-c4b7dd286e57] Running
	I0318 20:34:03.559617   13597 system_pods.go:89] "kube-controller-manager-addons-791443" [96ba543e-5e42-4cff-9e47-2e7dfa30583e] Running
	I0318 20:34:03.559622   13597 system_pods.go:89] "kube-ingress-dns-minikube" [89b0c3fb-9b61-4bc6-a3a0-01591b00214c] Running
	I0318 20:34:03.559626   13597 system_pods.go:89] "kube-proxy-4wrfg" [0f3b6822-070e-4e10-9964-72c49a522b3a] Running
	I0318 20:34:03.559630   13597 system_pods.go:89] "kube-scheduler-addons-791443" [3a876e0d-370f-4214-8c8b-c202c3eec9bd] Running
	I0318 20:34:03.559635   13597 system_pods.go:89] "metrics-server-69cf46c98-4nzjv" [b7ecdb56-4ae5-4112-9a8b-40564207c8ff] Running
	I0318 20:34:03.559640   13597 system_pods.go:89] "nvidia-device-plugin-daemonset-n5nbn" [45861794-8d7c-49b5-8748-20d3e179d433] Running
	I0318 20:34:03.559647   13597 system_pods.go:89] "registry-m9jd7" [b402e103-9225-45b0-811b-bc35d410e2a6] Running
	I0318 20:34:03.559654   13597 system_pods.go:89] "registry-proxy-br298" [17ae91fe-5e22-4c2b-8b5a-9bfb300a1126] Running
	I0318 20:34:03.559662   13597 system_pods.go:89] "snapshot-controller-58dbcc7b99-8tflr" [9c96fe29-b487-4cc3-a358-b8e659d3c316] Running
	I0318 20:34:03.559669   13597 system_pods.go:89] "snapshot-controller-58dbcc7b99-b54sp" [d2e9b9f2-196c-4001-90d2-54d70f2ca4bd] Running
	I0318 20:34:03.559683   13597 system_pods.go:89] "storage-provisioner" [a6846633-0977-43e0-bfaf-958353d2befc] Running
	I0318 20:34:03.559689   13597 system_pods.go:89] "tiller-deploy-7b677967b9-cksth" [5a800c36-110f-45ae-aabb-2a2089254b00] Running
	I0318 20:34:03.559697   13597 system_pods.go:126] duration metric: took 9.4912ms to wait for k8s-apps to be running ...
	I0318 20:34:03.559707   13597 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 20:34:03.559749   13597 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:34:03.576564   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:03.577338   13597 system_svc.go:56] duration metric: took 17.626773ms WaitForService to wait for kubelet
	I0318 20:34:03.577360   13597 kubeadm.go:576] duration metric: took 1m28.155195742s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:34:03.577387   13597 node_conditions.go:102] verifying NodePressure condition ...
	I0318 20:34:03.580306   13597 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:34:03.580327   13597 node_conditions.go:123] node cpu capacity is 2
	I0318 20:34:03.580339   13597 node_conditions.go:105] duration metric: took 2.945155ms to run NodePressure ...
	I0318 20:34:03.580350   13597 start.go:240] waiting for startup goroutines ...
	I0318 20:34:03.922994   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:03.951517   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:04.075233   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:04.407247   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:04.454784   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:04.575747   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:04.907121   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:04.950480   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:05.075956   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:05.406529   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:05.456319   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:05.576961   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:05.907426   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:05.951511   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 20:34:06.076377   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:06.407336   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:06.453298   13597 kapi.go:107] duration metric: took 1m19.008230195s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0318 20:34:06.575861   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:06.906795   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:07.075971   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:07.408782   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:07.576335   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:07.906179   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:08.076567   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:08.409236   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:08.576367   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:08.906756   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:09.076177   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:09.406778   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:09.576214   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:09.906817   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:10.075912   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:10.406837   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:10.576369   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:10.907864   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:11.076131   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:11.406760   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:11.576171   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:11.906617   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:12.075820   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:12.406392   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:12.575773   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:12.906455   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:13.075624   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:13.406056   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:13.576465   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:13.910555   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:14.075790   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:14.406763   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:14.576508   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:14.908336   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:15.076070   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:15.408201   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:15.575846   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:15.906713   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:16.075285   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:16.406935   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:16.576491   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:16.906684   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:17.075786   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:17.407706   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:17.575629   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:17.906488   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:18.075656   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:18.407928   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:18.576287   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:18.908078   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:19.078462   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:19.408166   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:19.576595   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:19.907000   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:20.076492   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:20.407209   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:20.576189   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:20.906276   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:21.076602   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:21.406295   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:21.576961   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:21.907151   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:22.076074   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:22.407285   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:22.576006   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:22.906859   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:23.076084   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:23.406449   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:23.575701   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:23.906536   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:24.076203   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:24.407897   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:24.576711   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:24.906935   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:25.079659   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:25.406228   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:25.576809   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:25.908835   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:26.078233   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:26.409137   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:26.576269   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:26.907525   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:27.077475   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:27.407528   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:27.576233   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:27.906521   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:28.076353   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:28.407284   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:28.577093   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:28.907270   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:29.078579   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:29.407622   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:29.575945   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:29.906824   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:30.076721   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:30.406613   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:30.576087   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:30.907775   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:31.076163   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:31.407368   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:31.575983   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:31.906748   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:32.075904   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:32.408711   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:32.576398   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:32.906666   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:33.075894   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:33.407302   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:33.576209   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:33.906545   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:34.077089   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:34.408500   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:34.576860   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:34.907486   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:35.075946   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:35.407611   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:35.576605   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:35.906690   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:36.075954   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:36.406371   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:36.575387   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:36.907591   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:37.076003   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:37.406516   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:37.576474   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:37.906671   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:38.076601   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:38.406801   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:38.576307   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:38.906499   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:39.075429   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:39.407151   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:39.576058   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:39.907451   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:40.075962   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:40.406214   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:40.575736   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:40.906876   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:41.076099   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:41.407053   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:41.576442   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:41.907601   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:42.076275   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:42.407766   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:42.575662   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:42.907285   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:43.076584   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:43.408156   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:43.576812   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:43.906433   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:44.075650   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:44.407497   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:44.575749   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:44.906189   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:45.077857   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:45.406936   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:45.575876   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:45.907817   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:46.077686   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:46.406201   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:46.576733   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:46.906420   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:47.076081   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:47.407024   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:47.575675   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:47.906859   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:48.076385   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:48.407075   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:48.576140   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:48.908605   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:49.076482   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:49.406740   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:49.575887   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:49.908679   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:50.076260   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:50.407448   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:50.576976   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:50.907146   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:51.076947   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:51.406296   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:51.576185   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:51.906399   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:52.075429   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:52.407647   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:52.576532   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:52.907021   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:53.075928   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:53.407405   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:53.575573   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:53.907696   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:54.076115   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:54.407239   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:54.577761   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:54.906598   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:55.075775   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:55.406472   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:55.576632   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:55.905851   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:56.076126   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:56.406450   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:56.575490   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:56.909627   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:57.075868   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:57.406255   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:57.576201   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:57.906186   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:58.076215   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:58.407147   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:58.576026   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:58.907115   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:59.076282   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:59.406866   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:34:59.575943   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:34:59.907074   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:00.076519   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:00.407267   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:00.575335   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:00.906839   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:01.075359   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:01.409002   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:01.576551   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:01.906968   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:02.076099   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:02.409335   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:02.575799   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:02.907260   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:03.075977   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:03.407017   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:03.576189   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:03.907156   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:04.077523   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:04.405970   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:04.576104   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:04.906083   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:05.076682   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:05.406475   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:05.575655   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:05.906893   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:06.075779   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:06.407280   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:06.880605   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:06.906724   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:07.075818   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:07.405865   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:07.575873   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:07.907700   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:08.076227   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:08.407341   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:08.576487   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:08.906227   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:09.076184   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:09.408076   13597 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 20:35:09.575563   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:09.907067   13597 kapi.go:107] duration metric: took 2m24.005489362s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0318 20:35:10.076956   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:10.576169   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:11.076241   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:11.577049   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:12.076944   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:12.579242   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:13.076551   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:13.577127   13597 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 20:35:14.082525   13597 kapi.go:107] duration metric: took 2m24.510695519s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0318 20:35:14.084325   13597 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-791443 cluster.
	I0318 20:35:14.085716   13597 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0318 20:35:14.087037   13597 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0318 20:35:14.088305   13597 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, default-storageclass, helm-tiller, cloud-spanner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0318 20:35:14.089548   13597 addons.go:505] duration metric: took 2m38.667322221s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin default-storageclass helm-tiller cloud-spanner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0318 20:35:14.089582   13597 start.go:245] waiting for cluster config update ...
	I0318 20:35:14.089598   13597 start.go:254] writing updated cluster config ...
	I0318 20:35:14.089826   13597 ssh_runner.go:195] Run: rm -f paused
	I0318 20:35:14.146782   13597 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 20:35:14.148195   13597 out.go:177] * Done! kubectl is now configured to use "addons-791443" cluster and "default" namespace by default
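	Note on the gcp-auth output above: the addon reports that GCP credentials are mounted into every pod unless the pod carries a gcp-auth-skip-secret label. A minimal pod manifest illustrating that opt-out might look like the sketch below; this is not taken from the test run, the label key comes from the log message, and the pod name, label value, and image tag are placeholders:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: skip-gcp-auth-example        # placeholder name, not from this run
	    labels:
	      gcp-auth-skip-secret: "true"     # label key per the log; value shown here is an assumption
	  spec:
	    containers:
	    - name: app                        # placeholder container
	      image: gcr.io/google-samples/hello-app:1.0   # image family seen elsewhere in this log; tag assumed
	
	As the same log lines note, pods that already exist would need to be recreated (or the addon re-enabled with --refresh) before the credential mount takes effect.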
	
	
	==> CRI-O <==
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.837750339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710794302837725926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564976,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e8a806b-1d4e-40bd-9b41-cd0814e6e13c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.838279622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bf1b00b-3214-4cb8-bc50-2c1fc7d1dbf1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.838410966Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bf1b00b-3214-4cb8-bc50-2c1fc7d1dbf1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.838790417Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df909de911a41857f8504568088a73f9d12b8d97011835400ef848dc73fd1e60,PodSandboxId:058169ced77ee55cf4fef7a8b85d796effe9bf4c4e3f9ec9c02a7fa1aafba2f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710794295836234643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-pn5rb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6de17ae2-1218-40c6-be81-3b1b970505dc,},Annotations:map[string]string{io.kubernetes.container.hash: ec570da2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84847aa229cc7d9680dd7aa174d7b95be3694f484f5a6a148885a3c49fa6c683,PodSandboxId:6e684ddafb12b4e59d55596a65a397811bd735a41a2228e9d3e15956ebfccbcd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710794153375774029,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c02a476d-3fe2-4ed7-8f9f-166a582aa95e,},Annotations:map[string]string{io.kubern
etes.container.hash: 2ca4a9e6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c145eb83d97a54df99463919204e6080ef8813be3527046b772548b8579592e,PodSandboxId:e05dda9f4b926232fee50c5089d25fbe20a6a1ba9a5740340f8e3f31fd522167,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710794144549120616,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-dv5fw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1de39436-0b96-4eb9-86be-ee2280b59105,},Annotations:map[string]string{io.kubernetes.container.hash: c0bc048d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667af4104ea3fa883037ccd2a63a33cc84f1cc164fb93eaf5f1c0dc8d7efe7b,PodSandboxId:c35d952b8d84663841ba911efce62e0b576e62718bb812c267d45882c35ba163,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710794113593223965,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-4w54p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: feac56a4-1804-4cbd-b14c-cdc3a1c70b65,},Annotations:map[string]string{io.kubernetes.container.hash: ab93b430,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ae14a206539829b002cd34095ff6612b7b302384f9176a3139671c3eb26a31,PodSandboxId:dba17e07cef8eb2f1c19233cca375ad132dfb7a05fef7d26ff81b9b9c9288de4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710794030340810984,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s52ss,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2bf5e487-1a92-4a2f-8b5a-43278f1d55c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1ff80d6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cec3c0c08a540b2eff603a1eb7bf702675ade7672be27494cbe0579480e2091,PodSandboxId:30cc00415ec47bd084809de98e0b1613f61080406615579f3773d31013f866d3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710794030210804435,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wzp7n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ec3c789-cb95-4656-97da-67ae6d6e3a33,},Annotations:map[string]string{io.kubernetes.container.hash: f1625612,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6853713e261b4f7aea473f5b1b14bb6c73630661bd165ea093d276345cb88d,PodSandboxId:9fe325d83a17c898121d55bfee063d90815bbe19dca847b08a676688853f7a53,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710794025101252484,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-ltsjd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 26c81ded-e590-46e9-8475-2ce1075fe93e,},Annotations:map[string]string{io.kubernetes.container.hash: f5a80d66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b1d82d4b0a1ab1555a1e99d99b00b421537c51bec6346fe848190e9a589432,PodSandboxId:8e38044395255a944c95c177eeb26e908f195c28f110e8663a1f66d8d92c10d2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710794008459243743,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-nvjkr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8595c6c7-7fa6-468a-bb41-23e8c560faa3,},Annotations:map[string]string{io.kubernetes.container.hash: c6f703d5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125c2ea93a71d470fdae82cc4500916c0ab8ac301baabf32c198275710ea0f64,PodSandboxId:313e96ef2d6a9dfed6b007924b8fc48bc1a833e275660373bbd41f0ef45f739f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710793964680187596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6846633-0977-43e0-bfaf-958353d2befc,},Annotations:map[string]string{io.kubernetes.container.hash: 4257eec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14,PodSandboxId:5af4f54d2582f3b6fe0923c3972e51a86dbc1f5fb6b7a613ccf8926834e41e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d
672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710793958875630340,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8jcwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c7519d-f00c-4a39-bbc4-fd41f886d578,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac0e26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746,PodSand
boxId:8fb53df134c3f6b357251ed82a33b9509f7a900d1e73e79ed0c024b55ea68a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710793957022217472,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wrfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3b6822-070e-4e10-9964-72c49a522b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6d102f21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e,PodSandboxId:02f924df49a73e5d26af6d7f714
148ea7519c533fd35391c81012d172af743e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710793937296313170,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066529f01bd636633e27126bee27f44b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738,PodSandboxId:b527a30217428cdff4a442eeaca2a6b160ccbe049754
3a1223db9af641ab001f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710793937298603765,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 325a0c4eb50d55e198f0879629211285,},Annotations:map[string]string{io.kubernetes.container.hash: 2aedd6cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c,PodSandboxId:51eb9ee72fdaeeaef252de3f88b2b23fb516c88359b76e4c18a76ec132cfb25d,Metadata:&ContainerMetadat
a{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710793937225862592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abff357939d8fb0a363799545324f518,},Annotations:map[string]string{io.kubernetes.container.hash: 6eaf7f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d,PodSandboxId:de0031a57f9a2ad8bebde7e8dbab1f27b2ce165c4aaa6ef4cf8cba928f1f0047,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710793937184904841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47edc23db360d946c8e51334022b218b,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bf1b00b-3214-4cb8-bc50-2c1fc7d1dbf1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.880853177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b49eabb8-34ca-4aa0-b82e-3fe970e8a95d name=/runtime.v1.RuntimeService/Version
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.880927650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b49eabb8-34ca-4aa0-b82e-3fe970e8a95d name=/runtime.v1.RuntimeService/Version
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.881784774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5749f1b4-87ae-43e7-b81b-33743623cc11 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.882990222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710794302882964298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564976,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5749f1b4-87ae-43e7-b81b-33743623cc11 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.883716350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcba6e18-31bd-4290-8a3a-030c6e8c184f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.883828241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcba6e18-31bd-4290-8a3a-030c6e8c184f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.884556970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df909de911a41857f8504568088a73f9d12b8d97011835400ef848dc73fd1e60,PodSandboxId:058169ced77ee55cf4fef7a8b85d796effe9bf4c4e3f9ec9c02a7fa1aafba2f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710794295836234643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-pn5rb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6de17ae2-1218-40c6-be81-3b1b970505dc,},Annotations:map[string]string{io.kubernetes.container.hash: ec570da2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84847aa229cc7d9680dd7aa174d7b95be3694f484f5a6a148885a3c49fa6c683,PodSandboxId:6e684ddafb12b4e59d55596a65a397811bd735a41a2228e9d3e15956ebfccbcd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710794153375774029,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c02a476d-3fe2-4ed7-8f9f-166a582aa95e,},Annotations:map[string]string{io.kubern
etes.container.hash: 2ca4a9e6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c145eb83d97a54df99463919204e6080ef8813be3527046b772548b8579592e,PodSandboxId:e05dda9f4b926232fee50c5089d25fbe20a6a1ba9a5740340f8e3f31fd522167,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710794144549120616,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-dv5fw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1de39436-0b96-4eb9-86be-ee2280b59105,},Annotations:map[string]string{io.kubernetes.container.hash: c0bc048d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667af4104ea3fa883037ccd2a63a33cc84f1cc164fb93eaf5f1c0dc8d7efe7b,PodSandboxId:c35d952b8d84663841ba911efce62e0b576e62718bb812c267d45882c35ba163,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710794113593223965,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-4w54p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: feac56a4-1804-4cbd-b14c-cdc3a1c70b65,},Annotations:map[string]string{io.kubernetes.container.hash: ab93b430,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ae14a206539829b002cd34095ff6612b7b302384f9176a3139671c3eb26a31,PodSandboxId:dba17e07cef8eb2f1c19233cca375ad132dfb7a05fef7d26ff81b9b9c9288de4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710794030340810984,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s52ss,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2bf5e487-1a92-4a2f-8b5a-43278f1d55c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1ff80d6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cec3c0c08a540b2eff603a1eb7bf702675ade7672be27494cbe0579480e2091,PodSandboxId:30cc00415ec47bd084809de98e0b1613f61080406615579f3773d31013f866d3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710794030210804435,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wzp7n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ec3c789-cb95-4656-97da-67ae6d6e3a33,},Annotations:map[string]string{io.kubernetes.container.hash: f1625612,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6853713e261b4f7aea473f5b1b14bb6c73630661bd165ea093d276345cb88d,PodSandboxId:9fe325d83a17c898121d55bfee063d90815bbe19dca847b08a676688853f7a53,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710794025101252484,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-ltsjd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 26c81ded-e590-46e9-8475-2ce1075fe93e,},Annotations:map[string]string{io.kubernetes.container.hash: f5a80d66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b1d82d4b0a1ab1555a1e99d99b00b421537c51bec6346fe848190e9a589432,PodSandboxId:8e38044395255a944c95c177eeb26e908f195c28f110e8663a1f66d8d92c10d2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710794008459243743,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-nvjkr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8595c6c7-7fa6-468a-bb41-23e8c560faa3,},Annotations:map[string]string{io.kubernetes.container.hash: c6f703d5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125c2ea93a71d470fdae82cc4500916c0ab8ac301baabf32c198275710ea0f64,PodSandboxId:313e96ef2d6a9dfed6b007924b8fc48bc1a833e275660373bbd41f0ef45f739f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710793964680187596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6846633-0977-43e0-bfaf-958353d2befc,},Annotations:map[string]string{io.kubernetes.container.hash: 4257eec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14,PodSandboxId:5af4f54d2582f3b6fe0923c3972e51a86dbc1f5fb6b7a613ccf8926834e41e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d
672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710793958875630340,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8jcwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c7519d-f00c-4a39-bbc4-fd41f886d578,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac0e26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746,PodSand
boxId:8fb53df134c3f6b357251ed82a33b9509f7a900d1e73e79ed0c024b55ea68a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710793957022217472,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wrfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3b6822-070e-4e10-9964-72c49a522b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6d102f21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e,PodSandboxId:02f924df49a73e5d26af6d7f714
148ea7519c533fd35391c81012d172af743e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710793937296313170,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066529f01bd636633e27126bee27f44b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738,PodSandboxId:b527a30217428cdff4a442eeaca2a6b160ccbe049754
3a1223db9af641ab001f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710793937298603765,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 325a0c4eb50d55e198f0879629211285,},Annotations:map[string]string{io.kubernetes.container.hash: 2aedd6cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c,PodSandboxId:51eb9ee72fdaeeaef252de3f88b2b23fb516c88359b76e4c18a76ec132cfb25d,Metadata:&ContainerMetadat
a{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710793937225862592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abff357939d8fb0a363799545324f518,},Annotations:map[string]string{io.kubernetes.container.hash: 6eaf7f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d,PodSandboxId:de0031a57f9a2ad8bebde7e8dbab1f27b2ce165c4aaa6ef4cf8cba928f1f0047,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710793937184904841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47edc23db360d946c8e51334022b218b,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcba6e18-31bd-4290-8a3a-030c6e8c184f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.919972981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c46b355-3d04-49dc-b12d-5f4e12b534a2 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.920062132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c46b355-3d04-49dc-b12d-5f4e12b534a2 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.921148924Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f38066b5-95e6-43db-a95e-ce2e17dc3d2b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.922869069Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710794302922841427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564976,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f38066b5-95e6-43db-a95e-ce2e17dc3d2b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.923349769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0ca942f-7066-405a-9f8c-2e2662ad2010 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.923405434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0ca942f-7066-405a-9f8c-2e2662ad2010 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.923820945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df909de911a41857f8504568088a73f9d12b8d97011835400ef848dc73fd1e60,PodSandboxId:058169ced77ee55cf4fef7a8b85d796effe9bf4c4e3f9ec9c02a7fa1aafba2f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710794295836234643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-pn5rb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6de17ae2-1218-40c6-be81-3b1b970505dc,},Annotations:map[string]string{io.kubernetes.container.hash: ec570da2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84847aa229cc7d9680dd7aa174d7b95be3694f484f5a6a148885a3c49fa6c683,PodSandboxId:6e684ddafb12b4e59d55596a65a397811bd735a41a2228e9d3e15956ebfccbcd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710794153375774029,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c02a476d-3fe2-4ed7-8f9f-166a582aa95e,},Annotations:map[string]string{io.kubern
etes.container.hash: 2ca4a9e6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c145eb83d97a54df99463919204e6080ef8813be3527046b772548b8579592e,PodSandboxId:e05dda9f4b926232fee50c5089d25fbe20a6a1ba9a5740340f8e3f31fd522167,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710794144549120616,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-dv5fw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1de39436-0b96-4eb9-86be-ee2280b59105,},Annotations:map[string]string{io.kubernetes.container.hash: c0bc048d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667af4104ea3fa883037ccd2a63a33cc84f1cc164fb93eaf5f1c0dc8d7efe7b,PodSandboxId:c35d952b8d84663841ba911efce62e0b576e62718bb812c267d45882c35ba163,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710794113593223965,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-4w54p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: feac56a4-1804-4cbd-b14c-cdc3a1c70b65,},Annotations:map[string]string{io.kubernetes.container.hash: ab93b430,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ae14a206539829b002cd34095ff6612b7b302384f9176a3139671c3eb26a31,PodSandboxId:dba17e07cef8eb2f1c19233cca375ad132dfb7a05fef7d26ff81b9b9c9288de4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710794030340810984,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s52ss,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2bf5e487-1a92-4a2f-8b5a-43278f1d55c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1ff80d6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cec3c0c08a540b2eff603a1eb7bf702675ade7672be27494cbe0579480e2091,PodSandboxId:30cc00415ec47bd084809de98e0b1613f61080406615579f3773d31013f866d3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710794030210804435,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wzp7n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ec3c789-cb95-4656-97da-67ae6d6e3a33,},Annotations:map[string]string{io.kubernetes.container.hash: f1625612,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6853713e261b4f7aea473f5b1b14bb6c73630661bd165ea093d276345cb88d,PodSandboxId:9fe325d83a17c898121d55bfee063d90815bbe19dca847b08a676688853f7a53,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710794025101252484,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-ltsjd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 26c81ded-e590-46e9-8475-2ce1075fe93e,},Annotations:map[string]string{io.kubernetes.container.hash: f5a80d66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b1d82d4b0a1ab1555a1e99d99b00b421537c51bec6346fe848190e9a589432,PodSandboxId:8e38044395255a944c95c177eeb26e908f195c28f110e8663a1f66d8d92c10d2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710794008459243743,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-nvjkr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8595c6c7-7fa6-468a-bb41-23e8c560faa3,},Annotations:map[string]string{io.kubernetes.container.hash: c6f703d5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125c2ea93a71d470fdae82cc4500916c0ab8ac301baabf32c198275710ea0f64,PodSandboxId:313e96ef2d6a9dfed6b007924b8fc48bc1a833e275660373bbd41f0ef45f739f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710793964680187596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6846633-0977-43e0-bfaf-958353d2befc,},Annotations:map[string]string{io.kubernetes.container.hash: 4257eec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14,PodSandboxId:5af4f54d2582f3b6fe0923c3972e51a86dbc1f5fb6b7a613ccf8926834e41e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d
672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710793958875630340,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8jcwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c7519d-f00c-4a39-bbc4-fd41f886d578,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac0e26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746,PodSand
boxId:8fb53df134c3f6b357251ed82a33b9509f7a900d1e73e79ed0c024b55ea68a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710793957022217472,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wrfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3b6822-070e-4e10-9964-72c49a522b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6d102f21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e,PodSandboxId:02f924df49a73e5d26af6d7f714
148ea7519c533fd35391c81012d172af743e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710793937296313170,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066529f01bd636633e27126bee27f44b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738,PodSandboxId:b527a30217428cdff4a442eeaca2a6b160ccbe049754
3a1223db9af641ab001f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710793937298603765,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 325a0c4eb50d55e198f0879629211285,},Annotations:map[string]string{io.kubernetes.container.hash: 2aedd6cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c,PodSandboxId:51eb9ee72fdaeeaef252de3f88b2b23fb516c88359b76e4c18a76ec132cfb25d,Metadata:&ContainerMetadat
a{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710793937225862592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abff357939d8fb0a363799545324f518,},Annotations:map[string]string{io.kubernetes.container.hash: 6eaf7f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d,PodSandboxId:de0031a57f9a2ad8bebde7e8dbab1f27b2ce165c4aaa6ef4cf8cba928f1f0047,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710793937184904841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47edc23db360d946c8e51334022b218b,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0ca942f-7066-405a-9f8c-2e2662ad2010 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.968998477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0af63b14-e0e0-48d8-83ea-aa74b5bef643 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.969077884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0af63b14-e0e0-48d8-83ea-aa74b5bef643 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.970362637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87546b86-5827-428c-bade-c8dd9d2ac1c5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.972459293Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710794302972433616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564976,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87546b86-5827-428c-bade-c8dd9d2ac1c5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.973110354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c662a88c-5da2-45d7-8545-4da1a7de96d4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.973191271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c662a88c-5da2-45d7-8545-4da1a7de96d4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:38:22 addons-791443 crio[688]: time="2024-03-18 20:38:22.973616490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df909de911a41857f8504568088a73f9d12b8d97011835400ef848dc73fd1e60,PodSandboxId:058169ced77ee55cf4fef7a8b85d796effe9bf4c4e3f9ec9c02a7fa1aafba2f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710794295836234643,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-pn5rb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6de17ae2-1218-40c6-be81-3b1b970505dc,},Annotations:map[string]string{io.kubernetes.container.hash: ec570da2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84847aa229cc7d9680dd7aa174d7b95be3694f484f5a6a148885a3c49fa6c683,PodSandboxId:6e684ddafb12b4e59d55596a65a397811bd735a41a2228e9d3e15956ebfccbcd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710794153375774029,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c02a476d-3fe2-4ed7-8f9f-166a582aa95e,},Annotations:map[string]string{io.kubern
etes.container.hash: 2ca4a9e6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c145eb83d97a54df99463919204e6080ef8813be3527046b772548b8579592e,PodSandboxId:e05dda9f4b926232fee50c5089d25fbe20a6a1ba9a5740340f8e3f31fd522167,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710794144549120616,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-dv5fw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1de39436-0b96-4eb9-86be-ee2280b59105,},Annotations:map[string]string{io.kubernetes.container.hash: c0bc048d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667af4104ea3fa883037ccd2a63a33cc84f1cc164fb93eaf5f1c0dc8d7efe7b,PodSandboxId:c35d952b8d84663841ba911efce62e0b576e62718bb812c267d45882c35ba163,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710794113593223965,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-4w54p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: feac56a4-1804-4cbd-b14c-cdc3a1c70b65,},Annotations:map[string]string{io.kubernetes.container.hash: ab93b430,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ae14a206539829b002cd34095ff6612b7b302384f9176a3139671c3eb26a31,PodSandboxId:dba17e07cef8eb2f1c19233cca375ad132dfb7a05fef7d26ff81b9b9c9288de4,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710794030340810984,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s52ss,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2bf5e487-1a92-4a2f-8b5a-43278f1d55c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1ff80d6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cec3c0c08a540b2eff603a1eb7bf702675ade7672be27494cbe0579480e2091,PodSandboxId:30cc00415ec47bd084809de98e0b1613f61080406615579f3773d31013f866d3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710794030210804435,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wzp7n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7ec3c789-cb95-4656-97da-67ae6d6e3a33,},Annotations:map[string]string{io.kubernetes.container.hash: f1625612,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be6853713e261b4f7aea473f5b1b14bb6c73630661bd165ea093d276345cb88d,PodSandboxId:9fe325d83a17c898121d55bfee063d90815bbe19dca847b08a676688853f7a53,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710794025101252484,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-ltsjd,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 26c81ded-e590-46e9-8475-2ce1075fe93e,},Annotations:map[string]string{io.kubernetes.container.hash: f5a80d66,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5b1d82d4b0a1ab1555a1e99d99b00b421537c51bec6346fe848190e9a589432,PodSandboxId:8e38044395255a944c95c177eeb26e908f195c28f110e8663a1f66d8d92c10d2,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710794008459243743,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-nvjkr,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 8595c6c7-7fa6-468a-bb41-23e8c560faa3,},Annotations:map[string]string{io.kubernetes.container.hash: c6f703d5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:125c2ea93a71d470fdae82cc4500916c0ab8ac301baabf32c198275710ea0f64,PodSandboxId:313e96ef2d6a9dfed6b007924b8fc48bc1a833e275660373bbd41f0ef45f739f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710793964680187596,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6846633-0977-43e0-bfaf-958353d2befc,},Annotations:map[string]string{io.kubernetes.container.hash: 4257eec0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14,PodSandboxId:5af4f54d2582f3b6fe0923c3972e51a86dbc1f5fb6b7a613ccf8926834e41e1b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d
672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710793958875630340,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-8jcwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c7519d-f00c-4a39-bbc4-fd41f886d578,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac0e26,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746,PodSand
boxId:8fb53df134c3f6b357251ed82a33b9509f7a900d1e73e79ed0c024b55ea68a6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710793957022217472,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4wrfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3b6822-070e-4e10-9964-72c49a522b3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6d102f21,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e,PodSandboxId:02f924df49a73e5d26af6d7f714
148ea7519c533fd35391c81012d172af743e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710793937296313170,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 066529f01bd636633e27126bee27f44b,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738,PodSandboxId:b527a30217428cdff4a442eeaca2a6b160ccbe049754
3a1223db9af641ab001f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710793937298603765,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 325a0c4eb50d55e198f0879629211285,},Annotations:map[string]string{io.kubernetes.container.hash: 2aedd6cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c,PodSandboxId:51eb9ee72fdaeeaef252de3f88b2b23fb516c88359b76e4c18a76ec132cfb25d,Metadata:&ContainerMetadat
a{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710793937225862592,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abff357939d8fb0a363799545324f518,},Annotations:map[string]string{io.kubernetes.container.hash: 6eaf7f56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d,PodSandboxId:de0031a57f9a2ad8bebde7e8dbab1f27b2ce165c4aaa6ef4cf8cba928f1f0047,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710793937184904841,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-791443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47edc23db360d946c8e51334022b218b,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c662a88c-5da2-45d7-8545-4da1a7de96d4 name=/runtime.v1.RuntimeService/ListContainers
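
	The repeated Version, ImageFsInfo, and ListContainers request/response pairs above are routine CRI polling of the cri-o socket (most likely the kubelet's periodic status and stats collection); each poll returns the same container list, so the container set on the node did not change over this window. A minimal sketch of how the same CRI queries could be issued by hand, assuming SSH access to the addons-791443 node and the default cri-o socket:

		minikube -p addons-791443 ssh
		sudo crictl version        # RuntimeService/Version
		sudo crictl imagefsinfo    # ImageService/ImageFsInfo
		sudo crictl ps -a          # RuntimeService/ListContainers (all containers)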
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df909de911a41       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   058169ced77ee       hello-world-app-5d77478584-pn5rb
	84847aa229cc7       docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba                              2 minutes ago       Running             nginx                     0                   6e684ddafb12b       nginx
	9c145eb83d97a       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        2 minutes ago       Running             headlamp                  0                   e05dda9f4b926       headlamp-5485c556b-dv5fw
	4667af4104ea3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   c35d952b8d846       gcp-auth-7d69788767-4w54p
	05ae14a206539       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago       Exited              patch                     0                   dba17e07cef8e       ingress-nginx-admission-patch-s52ss
	8cec3c0c08a54       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago       Exited              create                    0                   30cc00415ec47       ingress-nginx-admission-create-wzp7n
	be6853713e261       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   9fe325d83a17c       local-path-provisioner-78b46b4d5c-ltsjd
	e5b1d82d4b0a1       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   8e38044395255       yakd-dashboard-9947fc6bf-nvjkr
	125c2ea93a71d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   313e96ef2d6a9       storage-provisioner
	8adeff5af8795       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   5af4f54d2582f       coredns-5dd5756b68-8jcwf
	142d2a698e80e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   8fb53df134c3f       kube-proxy-4wrfg
	d3a79e81e90ff       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             6 minutes ago       Running             etcd                      0                   b527a30217428       etcd-addons-791443
	d8ba76d7778e5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             6 minutes ago       Running             kube-scheduler            0                   02f924df49a73       kube-scheduler-addons-791443
	c2a1ed1a067a7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             6 minutes ago       Running             kube-apiserver            0                   51eb9ee72fdae       kube-apiserver-addons-791443
	cbb108d53ff70       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             6 minutes ago       Running             kube-controller-manager   0                   de0031a57f9a2       kube-controller-manager-addons-791443
	
	
	==> coredns [8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14] <==
	[INFO] 10.244.0.8:43609 - 8445 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043731s
	[INFO] 10.244.0.8:52277 - 10364 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078406s
	[INFO] 10.244.0.8:52277 - 126 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048218s
	[INFO] 10.244.0.8:54940 - 12668 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064448s
	[INFO] 10.244.0.8:54940 - 39551 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000025198s
	[INFO] 10.244.0.8:36195 - 10056 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000163727s
	[INFO] 10.244.0.8:36195 - 33366 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000033138s
	[INFO] 10.244.0.8:38049 - 32761 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000055455s
	[INFO] 10.244.0.8:38049 - 11750 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000031353s
	[INFO] 10.244.0.8:60527 - 9528 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033004s
	[INFO] 10.244.0.8:60527 - 27706 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000029958s
	[INFO] 10.244.0.8:59418 - 48139 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034929s
	[INFO] 10.244.0.8:59418 - 38921 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00002783s
	[INFO] 10.244.0.8:51422 - 17403 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000032551s
	[INFO] 10.244.0.8:51422 - 39162 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000035562s
	[INFO] 10.244.0.22:46086 - 37205 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000423655s
	[INFO] 10.244.0.22:50855 - 6465 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156085s
	[INFO] 10.244.0.22:45221 - 17715 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089777s
	[INFO] 10.244.0.22:32934 - 45150 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000216118s
	[INFO] 10.244.0.22:50429 - 54039 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088555s
	[INFO] 10.244.0.22:55392 - 59548 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075987s
	[INFO] 10.244.0.22:56868 - 49254 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001225446s
	[INFO] 10.244.0.22:36613 - 23173 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001719236s
	[INFO] 10.244.0.26:52607 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000355297s
	[INFO] 10.244.0.26:59992 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00029172s
	
	
	==> describe nodes <==
	Name:               addons-791443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-791443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=addons-791443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T20_32_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-791443
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:32:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-791443
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:38:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 20:36:27 +0000   Mon, 18 Mar 2024 20:32:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 20:36:27 +0000   Mon, 18 Mar 2024 20:32:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 20:36:27 +0000   Mon, 18 Mar 2024 20:32:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 20:36:27 +0000   Mon, 18 Mar 2024 20:32:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.131
	  Hostname:    addons-791443
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4359e2ebdc154823b21ce7750f8ea9d3
	  System UUID:                4359e2eb-dc15-4823-b21c-e7750f8ea9d3
	  Boot ID:                    8d380586-c76f-4e12-b676-fc4358cc0ab9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-pn5rb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-7d69788767-4w54p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  headlamp                    headlamp-5485c556b-dv5fw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 coredns-5dd5756b68-8jcwf                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m48s
	  kube-system                 etcd-addons-791443                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m1s
	  kube-system                 kube-apiserver-addons-791443               250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-addons-791443      200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-4wrfg                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-scheduler-addons-791443               100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  local-path-storage          local-path-provisioner-78b46b4d5c-ltsjd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-nvjkr             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m44s                kube-proxy       
	  Normal  Starting                 6m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m7s (x8 over 6m7s)  kubelet          Node addons-791443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s (x8 over 6m7s)  kubelet          Node addons-791443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s (x7 over 6m7s)  kubelet          Node addons-791443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m                   kubelet          Node addons-791443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m                   kubelet          Node addons-791443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m                   kubelet          Node addons-791443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m                   kubelet          Node addons-791443 status is now: NodeReady
	  Normal  RegisteredNode           5m49s                node-controller  Node addons-791443 event: Registered Node addons-791443 in Controller
	
	
	==> dmesg <==
	[  +5.056027] kauditd_printk_skb: 82 callbacks suppressed
	[  +5.465198] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.863900] kauditd_printk_skb: 70 callbacks suppressed
	[Mar18 20:33] kauditd_printk_skb: 3 callbacks suppressed
	[ +17.183974] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.412847] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.103588] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.968960] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.001939] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.028476] kauditd_printk_skb: 38 callbacks suppressed
	[Mar18 20:34] kauditd_printk_skb: 32 callbacks suppressed
	[ +43.867303] kauditd_printk_skb: 24 callbacks suppressed
	[Mar18 20:35] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.147752] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.621362] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.012097] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.079640] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.386606] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.441257] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.030456] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.898136] kauditd_printk_skb: 52 callbacks suppressed
	[  +5.553213] kauditd_printk_skb: 17 callbacks suppressed
	[Mar18 20:36] kauditd_printk_skb: 25 callbacks suppressed
	[Mar18 20:38] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.403883] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738] <==
	{"level":"warn","ts":"2024-03-18T20:33:54.266292Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T20:33:53.901457Z","time spent":"364.831284ms","remote":"127.0.0.1:32918","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13849,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-03-18T20:33:54.266493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.326412ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81491"}
	{"level":"warn","ts":"2024-03-18T20:33:54.266994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.364987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T20:33:54.267023Z","caller":"traceutil/trace.go:171","msg":"trace[366299211] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1075; }","duration":"112.396256ms","start":"2024-03-18T20:33:54.154617Z","end":"2024-03-18T20:33:54.267014Z","steps":["trace[366299211] 'agreement among raft nodes before linearized reading'  (duration: 112.349095ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T20:33:54.267276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.485141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10528"}
	{"level":"info","ts":"2024-03-18T20:33:54.267296Z","caller":"traceutil/trace.go:171","msg":"trace[1554669065] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1075; }","duration":"195.509805ms","start":"2024-03-18T20:33:54.071782Z","end":"2024-03-18T20:33:54.267291Z","steps":["trace[1554669065] 'agreement among raft nodes before linearized reading'  (duration: 195.453762ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T20:33:54.266586Z","caller":"traceutil/trace.go:171","msg":"trace[620915901] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1075; }","duration":"320.350066ms","start":"2024-03-18T20:33:53.946158Z","end":"2024-03-18T20:33:54.266508Z","steps":["trace[620915901] 'agreement among raft nodes before linearized reading'  (duration: 320.2428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T20:33:54.267458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T20:33:53.946146Z","time spent":"321.305076ms","remote":"127.0.0.1:32918","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":81514,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-03-18T20:35:06.710981Z","caller":"traceutil/trace.go:171","msg":"trace[51572953] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"302.022292ms","start":"2024-03-18T20:35:06.408874Z","end":"2024-03-18T20:35:06.710896Z","steps":["trace[51572953] 'process raft request'  (duration: 301.929534ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T20:35:06.711119Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T20:35:06.408836Z","time spent":"302.230582ms","remote":"127.0.0.1:33000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1261 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-03-18T20:35:06.866159Z","caller":"traceutil/trace.go:171","msg":"trace[343893289] linearizableReadLoop","detail":"{readStateIndex:1318; appliedIndex:1317; }","duration":"297.917991ms","start":"2024-03-18T20:35:06.568214Z","end":"2024-03-18T20:35:06.866132Z","steps":["trace[343893289] 'read index received'  (duration: 143.691536ms)","trace[343893289] 'applied index is now lower than readState.Index'  (duration: 154.225821ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T20:35:06.86633Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.139663ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4154"}
	{"level":"info","ts":"2024-03-18T20:35:06.866358Z","caller":"traceutil/trace.go:171","msg":"trace[1556913511] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1270; }","duration":"298.186799ms","start":"2024-03-18T20:35:06.568165Z","end":"2024-03-18T20:35:06.866352Z","steps":["trace[1556913511] 'agreement among raft nodes before linearized reading'  (duration: 298.072256ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T20:35:06.866934Z","caller":"traceutil/trace.go:171","msg":"trace[2107094532] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"453.8012ms","start":"2024-03-18T20:35:06.413117Z","end":"2024-03-18T20:35:06.866918Z","steps":["trace[2107094532] 'process raft request'  (duration: 452.848072ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T20:35:06.867054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T20:35:06.413102Z","time spent":"453.904154ms","remote":"127.0.0.1:33000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-791443\" mod_revision:1255 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-791443\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-791443\" > >"}
	{"level":"info","ts":"2024-03-18T20:35:25.531231Z","caller":"traceutil/trace.go:171","msg":"trace[787661893] transaction","detail":"{read_only:false; response_revision:1384; number_of_response:1; }","duration":"262.398552ms","start":"2024-03-18T20:35:25.268804Z","end":"2024-03-18T20:35:25.531202Z","steps":["trace[787661893] 'process raft request'  (duration: 261.99564ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T20:35:25.53627Z","caller":"traceutil/trace.go:171","msg":"trace[1046380464] transaction","detail":"{read_only:false; response_revision:1385; number_of_response:1; }","duration":"220.795207ms","start":"2024-03-18T20:35:25.315462Z","end":"2024-03-18T20:35:25.536257Z","steps":["trace[1046380464] 'process raft request'  (duration: 220.321069ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T20:35:40.842134Z","caller":"traceutil/trace.go:171","msg":"trace[2094777347] transaction","detail":"{read_only:false; response_revision:1553; number_of_response:1; }","duration":"461.987231ms","start":"2024-03-18T20:35:40.380116Z","end":"2024-03-18T20:35:40.842103Z","steps":["trace[2094777347] 'process raft request'  (duration: 461.81034ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T20:35:40.842466Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T20:35:40.380099Z","time spent":"462.10972ms","remote":"127.0.0.1:32898","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1155,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" mod_revision:1532 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" value_size:1094 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" > >"}
	{"level":"info","ts":"2024-03-18T20:35:44.202935Z","caller":"traceutil/trace.go:171","msg":"trace[1487295230] linearizableReadLoop","detail":"{readStateIndex:1659; appliedIndex:1658; }","duration":"189.292434ms","start":"2024-03-18T20:35:44.013628Z","end":"2024-03-18T20:35:44.20292Z","steps":["trace[1487295230] 'read index received'  (duration: 189.123968ms)","trace[1487295230] 'applied index is now lower than readState.Index'  (duration: 167.935µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T20:35:44.203021Z","caller":"traceutil/trace.go:171","msg":"trace[258890776] transaction","detail":"{read_only:false; response_revision:1599; number_of_response:1; }","duration":"260.566745ms","start":"2024-03-18T20:35:43.942436Z","end":"2024-03-18T20:35:44.203003Z","steps":["trace[258890776] 'process raft request'  (duration: 260.353645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T20:35:44.203077Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.448797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2998"}
	{"level":"info","ts":"2024-03-18T20:35:44.203099Z","caller":"traceutil/trace.go:171","msg":"trace[208801222] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1599; }","duration":"189.488485ms","start":"2024-03-18T20:35:44.013604Z","end":"2024-03-18T20:35:44.203092Z","steps":["trace[208801222] 'agreement among raft nodes before linearized reading'  (duration: 189.395578ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T20:35:44.369449Z","caller":"traceutil/trace.go:171","msg":"trace[1575336365] transaction","detail":"{read_only:false; response_revision:1600; number_of_response:1; }","duration":"155.513368ms","start":"2024-03-18T20:35:44.213923Z","end":"2024-03-18T20:35:44.369436Z","steps":["trace[1575336365] 'process raft request'  (duration: 154.569849ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T20:36:15.094287Z","caller":"traceutil/trace.go:171","msg":"trace[1610011264] transaction","detail":"{read_only:false; response_revision:1819; number_of_response:1; }","duration":"150.667323ms","start":"2024-03-18T20:36:14.943606Z","end":"2024-03-18T20:36:15.094273Z","steps":["trace[1610011264] 'process raft request'  (duration: 150.525978ms)"],"step_count":1}
	
	
	==> gcp-auth [4667af4104ea3fa883037ccd2a63a33cc84f1cc164fb93eaf5f1c0dc8d7efe7b] <==
	2024/03/18 20:35:13 GCP Auth Webhook started!
	2024/03/18 20:35:14 Ready to marshal response ...
	2024/03/18 20:35:14 Ready to write response ...
	2024/03/18 20:35:14 Ready to marshal response ...
	2024/03/18 20:35:14 Ready to write response ...
	2024/03/18 20:35:19 Ready to marshal response ...
	2024/03/18 20:35:19 Ready to write response ...
	2024/03/18 20:35:25 Ready to marshal response ...
	2024/03/18 20:35:25 Ready to write response ...
	2024/03/18 20:35:32 Ready to marshal response ...
	2024/03/18 20:35:32 Ready to write response ...
	2024/03/18 20:35:33 Ready to marshal response ...
	2024/03/18 20:35:33 Ready to write response ...
	2024/03/18 20:35:34 Ready to marshal response ...
	2024/03/18 20:35:34 Ready to write response ...
	2024/03/18 20:35:34 Ready to marshal response ...
	2024/03/18 20:35:34 Ready to write response ...
	2024/03/18 20:35:34 Ready to marshal response ...
	2024/03/18 20:35:34 Ready to write response ...
	2024/03/18 20:35:41 Ready to marshal response ...
	2024/03/18 20:35:41 Ready to write response ...
	2024/03/18 20:35:48 Ready to marshal response ...
	2024/03/18 20:35:48 Ready to write response ...
	2024/03/18 20:38:12 Ready to marshal response ...
	2024/03/18 20:38:12 Ready to write response ...
	
	
	==> kernel <==
	 20:38:23 up 6 min,  0 users,  load average: 0.25, 1.04, 0.63
	Linux addons-791443 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c] <==
	E0318 20:35:45.706935       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
	I0318 20:35:48.503442       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0318 20:35:48.681636       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.230.191"}
	I0318 20:35:50.540121       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0318 20:36:02.125461       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 20:36:02.126109       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 20:36:02.137148       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 20:36:02.137231       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 20:36:02.164976       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 20:36:02.165118       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 20:36:02.174940       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 20:36:02.175019       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 20:36:02.202449       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 20:36:02.202587       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 20:36:02.218608       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 20:36:02.218664       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 20:36:02.218976       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 20:36:02.219089       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 20:36:02.231653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 20:36:02.231716       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0318 20:36:03.175937       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0318 20:36:03.220136       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0318 20:36:03.247605       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0318 20:38:12.676823       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.66.113"}
	E0318 20:38:15.217816       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d] <==
	W0318 20:37:09.319837       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 20:37:09.319889       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 20:37:12.523831       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 20:37:12.523858       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 20:37:29.391301       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 20:37:29.391428       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 20:37:48.718809       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 20:37:48.718983       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 20:37:50.755907       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 20:37:50.755966       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 20:37:51.555664       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 20:37:51.555730       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 20:38:02.989218       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 20:38:02.989374       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0318 20:38:12.458049       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0318 20:38:12.500286       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-pn5rb"
	I0318 20:38:12.508139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.695955ms"
	I0318 20:38:12.542674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="34.464477ms"
	I0318 20:38:12.542775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.582µs"
	I0318 20:38:12.542868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.597µs"
	I0318 20:38:14.993207       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0318 20:38:15.000914       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0318 20:38:15.016424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="7.322µs"
	I0318 20:38:16.189256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.789794ms"
	I0318 20:38:16.189910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="92.481µs"
	
	
	==> kube-proxy [142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746] <==
	I0318 20:32:38.704569       1 server_others.go:69] "Using iptables proxy"
	I0318 20:32:38.722821       1 node.go:141] Successfully retrieved node IP: 192.168.39.131
	I0318 20:32:38.827485       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 20:32:38.827504       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 20:32:38.843005       1 server_others.go:152] "Using iptables Proxier"
	I0318 20:32:38.843042       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 20:32:38.843223       1 server.go:846] "Version info" version="v1.28.4"
	I0318 20:32:38.843234       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 20:32:38.843978       1 config.go:188] "Starting service config controller"
	I0318 20:32:38.843996       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 20:32:38.844016       1 config.go:97] "Starting endpoint slice config controller"
	I0318 20:32:38.844019       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 20:32:38.844475       1 config.go:315] "Starting node config controller"
	I0318 20:32:38.844483       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 20:32:38.944325       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 20:32:38.944360       1 shared_informer.go:318] Caches are synced for service config
	I0318 20:32:38.944641       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e] <==
	W0318 20:32:19.847052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 20:32:19.847128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 20:32:20.689867       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 20:32:20.689916       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 20:32:20.705434       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 20:32:20.705599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 20:32:20.777668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 20:32:20.778276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 20:32:20.881088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 20:32:20.881174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 20:32:20.908625       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 20:32:20.908745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 20:32:20.952490       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 20:32:20.952651       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 20:32:20.962073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 20:32:20.962182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 20:32:21.059748       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 20:32:21.060300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 20:32:21.078901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 20:32:21.078956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 20:32:21.105700       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 20:32:21.105750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 20:32:21.105874       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 20:32:21.105916       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 20:32:23.719062       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 20:38:12 addons-791443 kubelet[1294]: I0318 20:38:12.580186    1294 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6de17ae2-1218-40c6-be81-3b1b970505dc-gcp-creds\") pod \"hello-world-app-5d77478584-pn5rb\" (UID: \"6de17ae2-1218-40c6-be81-3b1b970505dc\") " pod="default/hello-world-app-5d77478584-pn5rb"
	Mar 18 20:38:13 addons-791443 kubelet[1294]: I0318 20:38:13.788296    1294 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6pgz\" (UniqueName: \"kubernetes.io/projected/89b0c3fb-9b61-4bc6-a3a0-01591b00214c-kube-api-access-v6pgz\") pod \"89b0c3fb-9b61-4bc6-a3a0-01591b00214c\" (UID: \"89b0c3fb-9b61-4bc6-a3a0-01591b00214c\") "
	Mar 18 20:38:13 addons-791443 kubelet[1294]: I0318 20:38:13.790959    1294 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b0c3fb-9b61-4bc6-a3a0-01591b00214c-kube-api-access-v6pgz" (OuterVolumeSpecName: "kube-api-access-v6pgz") pod "89b0c3fb-9b61-4bc6-a3a0-01591b00214c" (UID: "89b0c3fb-9b61-4bc6-a3a0-01591b00214c"). InnerVolumeSpecName "kube-api-access-v6pgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 20:38:13 addons-791443 kubelet[1294]: I0318 20:38:13.888649    1294 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v6pgz\" (UniqueName: \"kubernetes.io/projected/89b0c3fb-9b61-4bc6-a3a0-01591b00214c-kube-api-access-v6pgz\") on node \"addons-791443\" DevicePath \"\""
	Mar 18 20:38:14 addons-791443 kubelet[1294]: I0318 20:38:14.085594    1294 scope.go:117] "RemoveContainer" containerID="e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441"
	Mar 18 20:38:14 addons-791443 kubelet[1294]: I0318 20:38:14.117897    1294 scope.go:117] "RemoveContainer" containerID="e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441"
	Mar 18 20:38:14 addons-791443 kubelet[1294]: E0318 20:38:14.128044    1294 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441\": container with ID starting with e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441 not found: ID does not exist" containerID="e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441"
	Mar 18 20:38:14 addons-791443 kubelet[1294]: I0318 20:38:14.128122    1294 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441"} err="failed to get container status \"e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441\": rpc error: code = NotFound desc = could not find container \"e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441\": container with ID starting with e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441 not found: ID does not exist"
	Mar 18 20:38:15 addons-791443 kubelet[1294]: I0318 20:38:15.077374    1294 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2bf5e487-1a92-4a2f-8b5a-43278f1d55c4" path="/var/lib/kubelet/pods/2bf5e487-1a92-4a2f-8b5a-43278f1d55c4/volumes"
	Mar 18 20:38:15 addons-791443 kubelet[1294]: I0318 20:38:15.078142    1294 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7ec3c789-cb95-4656-97da-67ae6d6e3a33" path="/var/lib/kubelet/pods/7ec3c789-cb95-4656-97da-67ae6d6e3a33/volumes"
	Mar 18 20:38:15 addons-791443 kubelet[1294]: I0318 20:38:15.079214    1294 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="89b0c3fb-9b61-4bc6-a3a0-01591b00214c" path="/var/lib/kubelet/pods/89b0c3fb-9b61-4bc6-a3a0-01591b00214c/volumes"
	Mar 18 20:38:16 addons-791443 kubelet[1294]: I0318 20:38:16.180585    1294 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-pn5rb" podStartSLOduration=1.424167097 podCreationTimestamp="2024-03-18 20:38:12 +0000 UTC" firstStartedPulling="2024-03-18 20:38:13.061451672 +0000 UTC m=+350.165617243" lastFinishedPulling="2024-03-18 20:38:15.817733106 +0000 UTC m=+352.921898678" observedRunningTime="2024-03-18 20:38:16.179646368 +0000 UTC m=+353.283811960" watchObservedRunningTime="2024-03-18 20:38:16.180448532 +0000 UTC m=+353.284614120"
	Mar 18 20:38:18 addons-791443 kubelet[1294]: I0318 20:38:18.322750    1294 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddtk6\" (UniqueName: \"kubernetes.io/projected/b814a9bf-74d7-4610-b54a-94deaf85098a-kube-api-access-ddtk6\") pod \"b814a9bf-74d7-4610-b54a-94deaf85098a\" (UID: \"b814a9bf-74d7-4610-b54a-94deaf85098a\") "
	Mar 18 20:38:18 addons-791443 kubelet[1294]: I0318 20:38:18.322798    1294 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b814a9bf-74d7-4610-b54a-94deaf85098a-webhook-cert\") pod \"b814a9bf-74d7-4610-b54a-94deaf85098a\" (UID: \"b814a9bf-74d7-4610-b54a-94deaf85098a\") "
	Mar 18 20:38:18 addons-791443 kubelet[1294]: I0318 20:38:18.327776    1294 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b814a9bf-74d7-4610-b54a-94deaf85098a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b814a9bf-74d7-4610-b54a-94deaf85098a" (UID: "b814a9bf-74d7-4610-b54a-94deaf85098a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 18 20:38:18 addons-791443 kubelet[1294]: I0318 20:38:18.328320    1294 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b814a9bf-74d7-4610-b54a-94deaf85098a-kube-api-access-ddtk6" (OuterVolumeSpecName: "kube-api-access-ddtk6") pod "b814a9bf-74d7-4610-b54a-94deaf85098a" (UID: "b814a9bf-74d7-4610-b54a-94deaf85098a"). InnerVolumeSpecName "kube-api-access-ddtk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 20:38:18 addons-791443 kubelet[1294]: I0318 20:38:18.423330    1294 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ddtk6\" (UniqueName: \"kubernetes.io/projected/b814a9bf-74d7-4610-b54a-94deaf85098a-kube-api-access-ddtk6\") on node \"addons-791443\" DevicePath \"\""
	Mar 18 20:38:18 addons-791443 kubelet[1294]: I0318 20:38:18.423358    1294 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b814a9bf-74d7-4610-b54a-94deaf85098a-webhook-cert\") on node \"addons-791443\" DevicePath \"\""
	Mar 18 20:38:19 addons-791443 kubelet[1294]: I0318 20:38:19.044910    1294 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b814a9bf-74d7-4610-b54a-94deaf85098a" path="/var/lib/kubelet/pods/b814a9bf-74d7-4610-b54a-94deaf85098a/volumes"
	Mar 18 20:38:19 addons-791443 kubelet[1294]: I0318 20:38:19.185461    1294 scope.go:117] "RemoveContainer" containerID="f5b193cb0c91ec93c5f4bec2f1b0b59720921d7631c46b4213b7830715586ac5"
	Mar 18 20:38:23 addons-791443 kubelet[1294]: E0318 20:38:23.062689    1294 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:38:23 addons-791443 kubelet[1294]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:38:23 addons-791443 kubelet[1294]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:38:23 addons-791443 kubelet[1294]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:38:23 addons-791443 kubelet[1294]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [125c2ea93a71d470fdae82cc4500916c0ab8ac301baabf32c198275710ea0f64] <==
	I0318 20:32:44.985638       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 20:32:45.163401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 20:32:45.170342       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 20:32:45.231967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 20:32:45.233743       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-791443_cfbb10b1-d7be-4eda-b72e-0d0841af3917!
	I0318 20:32:45.254492       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbbd2a3d-3e8f-4131-a1fa-492060357626", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-791443_cfbb10b1-d7be-4eda-b72e-0d0841af3917 became leader
	I0318 20:32:45.337173       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-791443_cfbb10b1-d7be-4eda-b72e-0d0841af3917!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-791443 -n addons-791443
helpers_test.go:261: (dbg) Run:  kubectl --context addons-791443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.94s)

x
+
TestAddons/parallel/LocalPath (19.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-791443 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-791443 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9694d686-c832-4a3c-8db2-db9d38df15be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9694d686-c832-4a3c-8db2-db9d38df15be] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9694d686-c832-4a3c-8db2-db9d38df15be] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 12.003670658s
addons_test.go:891: (dbg) Run:  kubectl --context addons-791443 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 ssh "cat /opt/local-path-provisioner/pvc-227ad095-46e9-4689-9ab7-b7b7ca5fbdc2_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-791443 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-791443 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791443 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (514.998485ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:35:33.055689   14886 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:35:33.055940   14886 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:35:33.055949   14886 out.go:304] Setting ErrFile to fd 2...
	I0318 20:35:33.055953   14886 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:35:33.056156   14886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:35:33.056395   14886 mustload.go:65] Loading cluster: addons-791443
	I0318 20:35:33.056695   14886 config.go:182] Loaded profile config "addons-791443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:35:33.056713   14886 addons.go:597] checking whether the cluster is paused
	I0318 20:35:33.056792   14886 config.go:182] Loaded profile config "addons-791443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:35:33.056809   14886 host.go:66] Checking if "addons-791443" exists ...
	I0318 20:35:33.057248   14886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:35:33.057283   14886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:35:33.071434   14886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0318 20:35:33.071857   14886 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:35:33.072366   14886 main.go:141] libmachine: Using API Version  1
	I0318 20:35:33.072397   14886 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:35:33.072713   14886 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:35:33.072928   14886 main.go:141] libmachine: (addons-791443) Calling .GetState
	I0318 20:35:33.074356   14886 main.go:141] libmachine: (addons-791443) Calling .DriverName
	I0318 20:35:33.074570   14886 ssh_runner.go:195] Run: systemctl --version
	I0318 20:35:33.074602   14886 main.go:141] libmachine: (addons-791443) Calling .GetSSHHostname
	I0318 20:35:33.076760   14886 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:35:33.077172   14886 main.go:141] libmachine: (addons-791443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:22:51", ip: ""} in network mk-addons-791443: {Iface:virbr1 ExpiryTime:2024-03-18 21:31:53 +0000 UTC Type:0 Mac:52:54:00:64:22:51 Iaid: IPaddr:192.168.39.131 Prefix:24 Hostname:addons-791443 Clientid:01:52:54:00:64:22:51}
	I0318 20:35:33.077198   14886 main.go:141] libmachine: (addons-791443) DBG | domain addons-791443 has defined IP address 192.168.39.131 and MAC address 52:54:00:64:22:51 in network mk-addons-791443
	I0318 20:35:33.077350   14886 main.go:141] libmachine: (addons-791443) Calling .GetSSHPort
	I0318 20:35:33.077516   14886 main.go:141] libmachine: (addons-791443) Calling .GetSSHKeyPath
	I0318 20:35:33.077674   14886 main.go:141] libmachine: (addons-791443) Calling .GetSSHUsername
	I0318 20:35:33.077805   14886 sshutil.go:53] new ssh client: &{IP:192.168.39.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/addons-791443/id_rsa Username:docker}
	I0318 20:35:33.199224   14886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 20:35:33.199312   14886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 20:35:33.310038   14886 cri.go:89] found id: "483785f08245e2cbc8fd6ff71f031b235ec95172ab42a77494d9de50c1d19e2c"
	I0318 20:35:33.310057   14886 cri.go:89] found id: "419b5e5306f66f32f22b374a77e4bdb2f63345b636d604542efc7c8be659c32d"
	I0318 20:35:33.310061   14886 cri.go:89] found id: "e65c31f780c1f7a2d7de264c08d95d58ae3936880a0bd1e37f3acb6a95c48737"
	I0318 20:35:33.310065   14886 cri.go:89] found id: "1197239047d89bdf0b1abd186b468269dffb88de2a67ccaf8a4ff0bf0a06eb91"
	I0318 20:35:33.310075   14886 cri.go:89] found id: "11d9680a6c032308c7efef078fd393d1a641338eeccee87c98cd18381171da80"
	I0318 20:35:33.310080   14886 cri.go:89] found id: "d4577ff9bfd46131edb77cd315a327c62d13e97fed0e932d4759bf2980b62822"
	I0318 20:35:33.310084   14886 cri.go:89] found id: "c05bd169ff4b664338bfccb5dcb9e0420b754691c260dfba63aa30d7c10308e9"
	I0318 20:35:33.310087   14886 cri.go:89] found id: "881bc3cce616f687883b4dbca9ba710a16b8b1313f44f6603e27485238731f8f"
	I0318 20:35:33.310092   14886 cri.go:89] found id: "689d19f0d8273d11a8d73614f647e10983da3a250917f027762bc67e0366ef45"
	I0318 20:35:33.310099   14886 cri.go:89] found id: "cf44ef4ff63855ff0832965f71247303b6af5fd8e9529b999502057ae27b3102"
	I0318 20:35:33.310103   14886 cri.go:89] found id: "1c6b7ca0d7a5abc3841d819112b809ee8410a62a9c8d0569d2abb42e01023247"
	I0318 20:35:33.310107   14886 cri.go:89] found id: "38cafc6fc1472578b0632fb3f7281e3b05845d9e97dbf8d30bbf0911e5a67ccb"
	I0318 20:35:33.310111   14886 cri.go:89] found id: "7ef5d537630fbd3695ab9076ba149657d0c22579402f83a09f375c0750472459"
	I0318 20:35:33.310115   14886 cri.go:89] found id: "e124b3baaa7112f6cd1cc5c4b0f62652598b0033199b1aa1d5eeaefe21e1a441"
	I0318 20:35:33.310120   14886 cri.go:89] found id: "185c783a56fe1a70e919ced103ae088aa19feb1e15c85acc3b1e920f9e783db4"
	I0318 20:35:33.310125   14886 cri.go:89] found id: "125c2ea93a71d470fdae82cc4500916c0ab8ac301baabf32c198275710ea0f64"
	I0318 20:35:33.310129   14886 cri.go:89] found id: "8adeff5af8795ec8ee6fb9ac1a7ad79ed6692a72ce1f39279afe8cd999cceb14"
	I0318 20:35:33.310134   14886 cri.go:89] found id: "142d2a698e80edbfb41cfc5714471107f65334720f9374a12a7ca943d3ae8746"
	I0318 20:35:33.310139   14886 cri.go:89] found id: "d3a79e81e90fffe0a5fb471327437c828279fbd1a15d2560aea3db3764112738"
	I0318 20:35:33.310141   14886 cri.go:89] found id: "d8ba76d7778e533fca7e585c3ca301e9ba9335e8b176433770633048be31c16e"
	I0318 20:35:33.310144   14886 cri.go:89] found id: "c2a1ed1a067a7169fb336d41e2d94cc6436700c3f2e6f560dbda7c3993353f5c"
	I0318 20:35:33.310146   14886 cri.go:89] found id: "cbb108d53ff70f3de7e26633d8f4ae57edf4f67938a844b950530a9e61af923d"
	I0318 20:35:33.310149   14886 cri.go:89] found id: ""
	I0318 20:35:33.310184   14886 ssh_runner.go:195] Run: sudo runc list -f json
	I0318 20:35:33.507287   14886 main.go:141] libmachine: Making call to close driver server
	I0318 20:35:33.507309   14886 main.go:141] libmachine: (addons-791443) Calling .Close
	I0318 20:35:33.507658   14886 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:35:33.507686   14886 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:35:33.509739   14886 out.go:177] 
	W0318 20:35:33.511051   14886 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-18T20:35:33Z" level=error msg="stat /run/runc/dee50304ed33e6aa9cd3294a27c3ec855e37831b67c8480c38c3fcb6a65dc614: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-18T20:35:33Z" level=error msg="stat /run/runc/dee50304ed33e6aa9cd3294a27c3ec855e37831b67c8480c38c3fcb6a65dc614: no such file or directory"
	
	W0318 20:35:33.511071   14886 out.go:239] * 
	* 
	W0318 20:35:33.513433   14886 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 20:35:33.515117   14886 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:922: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-791443 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (19.36s)
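For context on the exit status 11 above: the disable path first checks whether the cluster is paused by listing kube-system container IDs through crictl and then running "sudo runc list -f json" over SSH (both steps are visible in the stderr log), and the whole addon command aborts when that runc call fails. Below is a minimal Go sketch of the same two-step check; it is an illustration only, the file name and error handling are assumptions, and reading the stat error as a race with a container that exited between the two calls is an interpretation consistent with the missing /run/runc state directory, not a confirmed root cause.

// paused_check.go: sketch of the two-step paused check visible in the stderr
// above (crictl listing, then "runc list"); illustration only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1: container IDs in the kube-system namespace, as in the cri.go log lines.
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// Step 2: full runc container list. If a container from step 1 exits in
	// between, runc can fail while reading its /run/runc state directory,
	// which matches the stat error in the log (interpretation, not confirmed).
	list, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, list)
		return
	}
	fmt.Printf("crictl returned %d bytes of IDs, runc returned %d bytes of JSON\n", len(ids), len(list))
}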

                                                
                                    
TestAddons/StoppedEnableDisable (154.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-791443
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-791443: exit status 82 (2m0.459973541s)

                                                
                                                
-- stdout --
	* Stopping node "addons-791443"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-791443" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-791443
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-791443: exit status 11 (21.473501034s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.131:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-791443" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-791443
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-791443: exit status 11 (6.142394479s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.131:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-791443" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-791443
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-791443: exit status 11 (6.144080301s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.131:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-791443" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.22s)
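All three addon commands above fail before doing any addon work: each one first checks whether the cluster is paused, and that check needs an SSH connection to the node, which is unreachable after the failed stop ("dial tcp 192.168.39.131:22: connect: no route to host"). A minimal reachability probe along the same lines is sketched below; the node address is copied from the log, while the file name and the 5 second timeout are arbitrary choices for the illustration.

// ssh_reachable.go: TCP probe of the node's SSH port, the step that fails with
// "no route to host" in the addon enable/disable attempts above (illustration only).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.131:22" // node address and port taken from the log above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("node unreachable:", err) // e.g. "connect: no route to host"
		return
	}
	defer conn.Close()
	fmt.Println("node reachable on", addr)
}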

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 node stop m02 -v=7 --alsologtostderr
E0318 20:53:07.079699   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.48474991s)

                                                
                                                
-- stdout --
	* Stopping node "ha-315064-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:52:32.310529   25646 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:52:32.310819   25646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:52:32.310831   25646 out.go:304] Setting ErrFile to fd 2...
	I0318 20:52:32.310835   25646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:52:32.311518   25646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:52:32.311907   25646 mustload.go:65] Loading cluster: ha-315064
	I0318 20:52:32.312897   25646 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:52:32.312942   25646 stop.go:39] StopHost: ha-315064-m02
	I0318 20:52:32.313379   25646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:52:32.313428   25646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:52:32.329060   25646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I0318 20:52:32.329571   25646 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:52:32.330175   25646 main.go:141] libmachine: Using API Version  1
	I0318 20:52:32.330223   25646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:52:32.330591   25646 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:52:32.332821   25646 out.go:177] * Stopping node "ha-315064-m02"  ...
	I0318 20:52:32.334099   25646 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 20:52:32.334123   25646 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:52:32.334363   25646 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 20:52:32.334398   25646 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:52:32.337268   25646 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:52:32.337668   25646 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:52:32.337700   25646 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:52:32.337807   25646 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:52:32.337993   25646 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:52:32.338171   25646 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:52:32.338330   25646 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:52:32.428546   25646 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 20:52:32.489355   25646 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 20:52:32.545496   25646 main.go:141] libmachine: Stopping "ha-315064-m02"...
	I0318 20:52:32.545519   25646 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:52:32.547301   25646 main.go:141] libmachine: (ha-315064-m02) Calling .Stop
	I0318 20:52:32.551111   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 0/120
	I0318 20:52:33.552446   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 1/120
	I0318 20:52:34.554001   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 2/120
	I0318 20:52:35.555215   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 3/120
	I0318 20:52:36.556301   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 4/120
	I0318 20:52:37.558095   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 5/120
	I0318 20:52:38.560138   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 6/120
	I0318 20:52:39.561408   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 7/120
	I0318 20:52:40.562744   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 8/120
	I0318 20:52:41.564923   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 9/120
	I0318 20:52:42.567231   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 10/120
	I0318 20:52:43.568733   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 11/120
	I0318 20:52:44.570051   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 12/120
	I0318 20:52:45.571457   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 13/120
	I0318 20:52:46.573116   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 14/120
	I0318 20:52:47.575342   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 15/120
	I0318 20:52:48.576519   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 16/120
	I0318 20:52:49.577878   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 17/120
	I0318 20:52:50.579637   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 18/120
	I0318 20:52:51.581718   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 19/120
	I0318 20:52:52.583573   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 20/120
	I0318 20:52:53.584855   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 21/120
	I0318 20:52:54.586186   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 22/120
	I0318 20:52:55.587422   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 23/120
	I0318 20:52:56.588579   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 24/120
	I0318 20:52:57.589869   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 25/120
	I0318 20:52:58.591296   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 26/120
	I0318 20:52:59.592732   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 27/120
	I0318 20:53:00.594290   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 28/120
	I0318 20:53:01.595594   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 29/120
	I0318 20:53:02.597617   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 30/120
	I0318 20:53:03.599614   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 31/120
	I0318 20:53:04.600806   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 32/120
	I0318 20:53:05.602478   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 33/120
	I0318 20:53:06.604023   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 34/120
	I0318 20:53:07.606286   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 35/120
	I0318 20:53:08.607854   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 36/120
	I0318 20:53:09.609161   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 37/120
	I0318 20:53:10.611361   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 38/120
	I0318 20:53:11.612869   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 39/120
	I0318 20:53:12.615005   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 40/120
	I0318 20:53:13.616573   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 41/120
	I0318 20:53:14.617906   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 42/120
	I0318 20:53:15.619949   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 43/120
	I0318 20:53:16.621252   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 44/120
	I0318 20:53:17.622853   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 45/120
	I0318 20:53:18.624155   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 46/120
	I0318 20:53:19.625513   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 47/120
	I0318 20:53:20.626644   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 48/120
	I0318 20:53:21.627969   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 49/120
	I0318 20:53:22.630026   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 50/120
	I0318 20:53:23.632201   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 51/120
	I0318 20:53:24.633516   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 52/120
	I0318 20:53:25.635835   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 53/120
	I0318 20:53:26.637101   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 54/120
	I0318 20:53:27.638970   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 55/120
	I0318 20:53:28.640255   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 56/120
	I0318 20:53:29.641620   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 57/120
	I0318 20:53:30.643425   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 58/120
	I0318 20:53:31.644812   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 59/120
	I0318 20:53:32.646637   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 60/120
	I0318 20:53:33.648269   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 61/120
	I0318 20:53:34.649798   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 62/120
	I0318 20:53:35.651935   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 63/120
	I0318 20:53:36.653975   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 64/120
	I0318 20:53:37.655853   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 65/120
	I0318 20:53:38.657236   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 66/120
	I0318 20:53:39.658510   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 67/120
	I0318 20:53:40.659907   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 68/120
	I0318 20:53:41.661315   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 69/120
	I0318 20:53:42.663250   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 70/120
	I0318 20:53:43.664640   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 71/120
	I0318 20:53:44.665927   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 72/120
	I0318 20:53:45.667363   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 73/120
	I0318 20:53:46.668627   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 74/120
	I0318 20:53:47.670058   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 75/120
	I0318 20:53:48.671527   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 76/120
	I0318 20:53:49.672826   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 77/120
	I0318 20:53:50.674110   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 78/120
	I0318 20:53:51.675324   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 79/120
	I0318 20:53:52.677265   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 80/120
	I0318 20:53:53.679385   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 81/120
	I0318 20:53:54.680760   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 82/120
	I0318 20:53:55.682331   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 83/120
	I0318 20:53:56.683616   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 84/120
	I0318 20:53:57.685219   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 85/120
	I0318 20:53:58.687376   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 86/120
	I0318 20:53:59.688678   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 87/120
	I0318 20:54:00.690304   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 88/120
	I0318 20:54:01.691650   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 89/120
	I0318 20:54:02.693875   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 90/120
	I0318 20:54:03.695575   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 91/120
	I0318 20:54:04.696940   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 92/120
	I0318 20:54:05.698097   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 93/120
	I0318 20:54:06.699636   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 94/120
	I0318 20:54:07.701553   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 95/120
	I0318 20:54:08.703293   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 96/120
	I0318 20:54:09.705457   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 97/120
	I0318 20:54:10.707261   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 98/120
	I0318 20:54:11.708595   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 99/120
	I0318 20:54:12.710149   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 100/120
	I0318 20:54:13.712307   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 101/120
	I0318 20:54:14.713719   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 102/120
	I0318 20:54:15.715819   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 103/120
	I0318 20:54:16.717064   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 104/120
	I0318 20:54:17.719469   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 105/120
	I0318 20:54:18.721115   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 106/120
	I0318 20:54:19.722376   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 107/120
	I0318 20:54:20.723827   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 108/120
	I0318 20:54:21.725720   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 109/120
	I0318 20:54:22.727296   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 110/120
	I0318 20:54:23.728619   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 111/120
	I0318 20:54:24.729905   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 112/120
	I0318 20:54:25.731777   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 113/120
	I0318 20:54:26.733225   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 114/120
	I0318 20:54:27.734968   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 115/120
	I0318 20:54:28.736199   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 116/120
	I0318 20:54:29.737546   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 117/120
	I0318 20:54:30.738914   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 118/120
	I0318 20:54:31.740307   25646 main.go:141] libmachine: (ha-315064-m02) Waiting for machine to stop 119/120
	I0318 20:54:32.741644   25646 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 20:54:32.741780   25646 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-315064 node stop m02 -v=7 --alsologtostderr": exit status 30
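The "Waiting for machine to stop 0/120" through "119/120" lines in the stderr above are a once-per-second poll of the VM state; after 120 unsuccessful attempts the command gives up and exits with status 30 and the "unable to stop vm, current state \"Running\"" error. The sketch below shows that poll-with-timeout shape; it is not minikube's stop implementation, and the function and file names are made up for the illustration.

// stop_wait.go: shape of the poll-with-timeout loop implied by the
// "Waiting for machine to stop N/120" lines above (not minikube's code).
package main

import (
	"fmt"
	"time"
)

// getState stands in for querying the hypervisor; it always reports "Running",
// which is the case the log shows.
func getState() string { return "Running" }

func waitForStop(attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() != "Running" {
			return nil // machine stopped within the allowed window
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	if err := waitForStop(120); err != nil {
		fmt.Println("stop err:", err) // corresponds to exit status 30 above
	}
}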
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 3 (19.055240398s)

                                                
                                                
-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:54:32.798475   25959 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:54:32.798590   25959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:54:32.798600   25959 out.go:304] Setting ErrFile to fd 2...
	I0318 20:54:32.798604   25959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:54:32.798801   25959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:54:32.798999   25959 out.go:298] Setting JSON to false
	I0318 20:54:32.799025   25959 mustload.go:65] Loading cluster: ha-315064
	I0318 20:54:32.799139   25959 notify.go:220] Checking for updates...
	I0318 20:54:32.799482   25959 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:54:32.799496   25959 status.go:255] checking status of ha-315064 ...
	I0318 20:54:32.799896   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:32.799961   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:32.816560   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41269
	I0318 20:54:32.816991   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:32.817623   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:32.817656   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:32.817991   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:32.818190   25959 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:54:32.819547   25959 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:54:32.819564   25959 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:54:32.819820   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:32.819851   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:32.833685   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0318 20:54:32.834058   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:32.834516   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:32.834540   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:32.834871   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:32.835063   25959 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:54:32.837900   25959 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:54:32.838328   25959 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:54:32.838362   25959 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:54:32.838555   25959 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:54:32.838840   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:32.838890   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:32.852702   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0318 20:54:32.853119   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:32.853622   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:32.853653   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:32.854048   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:32.854275   25959 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:54:32.854508   25959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:54:32.854547   25959 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:54:32.857432   25959 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:54:32.857770   25959 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:54:32.857801   25959 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:54:32.857957   25959 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:54:32.858277   25959 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:54:32.858440   25959 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:54:32.858577   25959 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:54:32.947486   25959 ssh_runner.go:195] Run: systemctl --version
	I0318 20:54:32.956652   25959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:54:32.977596   25959 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:54:32.977621   25959 api_server.go:166] Checking apiserver status ...
	I0318 20:54:32.977650   25959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:54:32.996892   25959 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:54:33.009937   25959 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:54:33.009994   25959 ssh_runner.go:195] Run: ls
	I0318 20:54:33.015468   25959 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:54:33.023473   25959 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:54:33.023493   25959 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:54:33.023509   25959 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:54:33.023531   25959 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:54:33.023830   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:33.023862   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:33.040085   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I0318 20:54:33.040638   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:33.041256   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:33.041285   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:33.041668   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:33.041913   25959 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:54:33.043652   25959 status.go:330] ha-315064-m02 host status = "Running" (err=<nil>)
	I0318 20:54:33.043671   25959 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:54:33.043993   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:33.044045   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:33.058440   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I0318 20:54:33.058809   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:33.059251   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:33.059273   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:33.059532   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:33.059673   25959 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:54:33.062086   25959 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:54:33.062499   25959 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:54:33.062522   25959 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:54:33.062641   25959 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:54:33.062918   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:33.062957   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:33.077391   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
	I0318 20:54:33.077843   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:33.078289   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:33.078308   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:33.078650   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:33.078829   25959 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:54:33.079013   25959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:54:33.079034   25959 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:54:33.081601   25959 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:54:33.082049   25959 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:54:33.082077   25959 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:54:33.082257   25959 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:54:33.082429   25959 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:54:33.082577   25959 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:54:33.082724   25959 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	W0318 20:54:51.429202   25959 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:54:51.429272   25959 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E0318 20:54:51.429285   25959 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:54:51.429292   25959 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 20:54:51.429308   25959 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:54:51.429316   25959 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:54:51.429691   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:51.429752   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:51.444062   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0318 20:54:51.444499   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:51.444942   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:51.444963   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:51.445299   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:51.445511   25959 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:54:51.447201   25959 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:54:51.447217   25959 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:54:51.447496   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:51.447543   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:51.462035   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I0318 20:54:51.462390   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:51.462887   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:51.462904   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:51.463169   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:51.463357   25959 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:54:51.466133   25959 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:54:51.466533   25959 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:54:51.466559   25959 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:54:51.466681   25959 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:54:51.467076   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:51.467117   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:51.480630   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0318 20:54:51.481004   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:51.481472   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:51.481504   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:51.481856   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:51.482047   25959 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:54:51.482223   25959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:54:51.482247   25959 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:54:51.484657   25959 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:54:51.485035   25959 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:54:51.485059   25959 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:54:51.485169   25959 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:54:51.485326   25959 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:54:51.485491   25959 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:54:51.485620   25959 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:54:51.570584   25959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:54:51.590152   25959 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:54:51.590174   25959 api_server.go:166] Checking apiserver status ...
	I0318 20:54:51.590201   25959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:54:51.607018   25959 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:54:51.619862   25959 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:54:51.619917   25959 ssh_runner.go:195] Run: ls
	I0318 20:54:51.627463   25959 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:54:51.632453   25959 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:54:51.632477   25959 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:54:51.632488   25959 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:54:51.632506   25959 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:54:51.632812   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:51.632844   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:51.650050   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I0318 20:54:51.650548   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:51.651053   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:51.651074   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:51.651437   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:51.651625   25959 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:54:51.653286   25959 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:54:51.653301   25959 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:54:51.653571   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:51.653604   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:51.667773   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0318 20:54:51.668112   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:51.668541   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:51.668556   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:51.668887   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:51.669139   25959 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:54:51.671732   25959 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:54:51.672140   25959 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:54:51.672178   25959 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:54:51.672294   25959 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:54:51.672562   25959 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:51.672593   25959 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:51.686163   25959 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43859
	I0318 20:54:51.686505   25959 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:51.686872   25959 main.go:141] libmachine: Using API Version  1
	I0318 20:54:51.686889   25959 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:51.687233   25959 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:51.687423   25959 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:54:51.687599   25959 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:54:51.687618   25959 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:54:51.689835   25959 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:54:51.690217   25959 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:54:51.690255   25959 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:54:51.690378   25959 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:54:51.690536   25959 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:54:51.690692   25959 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:54:51.690822   25959 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:54:51.778550   25959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:54:51.796792   25959 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-315064 -n ha-315064
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-315064 logs -n 25: (1.577715165s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064:/home/docker/cp-test_ha-315064-m03_ha-315064.txt                      |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064 sudo cat                                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064.txt                                |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m02:/home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m02 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04:/home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m04 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp testdata/cp-test.txt                                               | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064:/home/docker/cp-test_ha-315064-m04_ha-315064.txt                      |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064 sudo cat                                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064.txt                                |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m02:/home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m02 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03:/home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m03 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-315064 node stop m02 -v=7                                                    | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 20:46:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 20:46:21.885782   21691 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:46:21.885913   21691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:46:21.885922   21691 out.go:304] Setting ErrFile to fd 2...
	I0318 20:46:21.885925   21691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:46:21.886118   21691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:46:21.886685   21691 out.go:298] Setting JSON to false
	I0318 20:46:21.887530   21691 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1726,"bootTime":1710793056,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:46:21.887590   21691 start.go:139] virtualization: kvm guest
	I0318 20:46:21.889402   21691 out.go:177] * [ha-315064] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:46:21.890735   21691 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 20:46:21.891888   21691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:46:21.890792   21691 notify.go:220] Checking for updates...
	I0318 20:46:21.894112   21691 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:46:21.895264   21691 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:46:21.896403   21691 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 20:46:21.897538   21691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 20:46:21.898928   21691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:46:21.931371   21691 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 20:46:21.932613   21691 start.go:297] selected driver: kvm2
	I0318 20:46:21.932627   21691 start.go:901] validating driver "kvm2" against <nil>
	I0318 20:46:21.932639   21691 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 20:46:21.933394   21691 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:46:21.933464   21691 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 20:46:21.947602   21691 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 20:46:21.947657   21691 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 20:46:21.947851   21691 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:46:21.947906   21691 cni.go:84] Creating CNI manager for ""
	I0318 20:46:21.947917   21691 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 20:46:21.947922   21691 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 20:46:21.947978   21691 start.go:340] cluster config:
	{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:46:21.948058   21691 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:46:21.949771   21691 out.go:177] * Starting "ha-315064" primary control-plane node in "ha-315064" cluster
	I0318 20:46:21.950997   21691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:46:21.951024   21691 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 20:46:21.951030   21691 cache.go:56] Caching tarball of preloaded images
	I0318 20:46:21.951097   21691 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:46:21.951108   21691 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:46:21.951385   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:46:21.951403   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json: {Name:mk3e2c3521eb14f618d4105d084216970f5e6904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:21.951519   21691 start.go:360] acquireMachinesLock for ha-315064: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:46:21.951546   21691 start.go:364] duration metric: took 13.923µs to acquireMachinesLock for "ha-315064"
	I0318 20:46:21.951561   21691 start.go:93] Provisioning new machine with config: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:46:21.951619   21691 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 20:46:21.953226   21691 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 20:46:21.953371   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:46:21.953402   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:46:21.966804   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0318 20:46:21.967173   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:46:21.967702   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:46:21.967731   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:46:21.968037   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:46:21.968206   21691 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:46:21.968333   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:21.968476   21691 start.go:159] libmachine.API.Create for "ha-315064" (driver="kvm2")
	I0318 20:46:21.968502   21691 client.go:168] LocalClient.Create starting
	I0318 20:46:21.968533   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 20:46:21.968570   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:46:21.968594   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:46:21.968663   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 20:46:21.968687   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:46:21.968713   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:46:21.968761   21691 main.go:141] libmachine: Running pre-create checks...
	I0318 20:46:21.968775   21691 main.go:141] libmachine: (ha-315064) Calling .PreCreateCheck
	I0318 20:46:21.969084   21691 main.go:141] libmachine: (ha-315064) Calling .GetConfigRaw
	I0318 20:46:21.969413   21691 main.go:141] libmachine: Creating machine...
	I0318 20:46:21.969426   21691 main.go:141] libmachine: (ha-315064) Calling .Create
	I0318 20:46:21.969543   21691 main.go:141] libmachine: (ha-315064) Creating KVM machine...
	I0318 20:46:21.970696   21691 main.go:141] libmachine: (ha-315064) DBG | found existing default KVM network
	I0318 20:46:21.971295   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:21.971171   21714 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012b980}
	I0318 20:46:21.971315   21691 main.go:141] libmachine: (ha-315064) DBG | created network xml: 
	I0318 20:46:21.971327   21691 main.go:141] libmachine: (ha-315064) DBG | <network>
	I0318 20:46:21.971343   21691 main.go:141] libmachine: (ha-315064) DBG |   <name>mk-ha-315064</name>
	I0318 20:46:21.971376   21691 main.go:141] libmachine: (ha-315064) DBG |   <dns enable='no'/>
	I0318 20:46:21.971398   21691 main.go:141] libmachine: (ha-315064) DBG |   
	I0318 20:46:21.971414   21691 main.go:141] libmachine: (ha-315064) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 20:46:21.971426   21691 main.go:141] libmachine: (ha-315064) DBG |     <dhcp>
	I0318 20:46:21.971440   21691 main.go:141] libmachine: (ha-315064) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 20:46:21.971451   21691 main.go:141] libmachine: (ha-315064) DBG |     </dhcp>
	I0318 20:46:21.971461   21691 main.go:141] libmachine: (ha-315064) DBG |   </ip>
	I0318 20:46:21.971477   21691 main.go:141] libmachine: (ha-315064) DBG |   
	I0318 20:46:21.971489   21691 main.go:141] libmachine: (ha-315064) DBG | </network>
	I0318 20:46:21.971500   21691 main.go:141] libmachine: (ha-315064) DBG | 
	I0318 20:46:21.975746   21691 main.go:141] libmachine: (ha-315064) DBG | trying to create private KVM network mk-ha-315064 192.168.39.0/24...
	I0318 20:46:22.036788   21691 main.go:141] libmachine: (ha-315064) DBG | private KVM network mk-ha-315064 192.168.39.0/24 created
	I0318 20:46:22.036812   21691 main.go:141] libmachine: (ha-315064) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064 ...
	I0318 20:46:22.036829   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:22.036773   21714 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:46:22.036858   21691 main.go:141] libmachine: (ha-315064) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 20:46:22.036926   21691 main.go:141] libmachine: (ha-315064) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 20:46:22.262603   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:22.262489   21714 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa...
	I0318 20:46:22.442782   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:22.442650   21714 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/ha-315064.rawdisk...
	I0318 20:46:22.442819   21691 main.go:141] libmachine: (ha-315064) DBG | Writing magic tar header
	I0318 20:46:22.442832   21691 main.go:141] libmachine: (ha-315064) DBG | Writing SSH key tar header
	I0318 20:46:22.442848   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:22.442813   21714 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064 ...
	I0318 20:46:22.442954   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064
	I0318 20:46:22.442984   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 20:46:22.442994   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064 (perms=drwx------)
	I0318 20:46:22.443010   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:46:22.443026   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 20:46:22.443035   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 20:46:22.443050   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 20:46:22.443059   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins
	I0318 20:46:22.443071   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 20:46:22.443089   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 20:46:22.443100   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home
	I0318 20:46:22.443111   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 20:46:22.443126   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 20:46:22.443136   21691 main.go:141] libmachine: (ha-315064) Creating domain...
	I0318 20:46:22.443145   21691 main.go:141] libmachine: (ha-315064) DBG | Skipping /home - not owner
	I0318 20:46:22.444100   21691 main.go:141] libmachine: (ha-315064) define libvirt domain using xml: 
	I0318 20:46:22.444116   21691 main.go:141] libmachine: (ha-315064) <domain type='kvm'>
	I0318 20:46:22.444122   21691 main.go:141] libmachine: (ha-315064)   <name>ha-315064</name>
	I0318 20:46:22.444130   21691 main.go:141] libmachine: (ha-315064)   <memory unit='MiB'>2200</memory>
	I0318 20:46:22.444136   21691 main.go:141] libmachine: (ha-315064)   <vcpu>2</vcpu>
	I0318 20:46:22.444140   21691 main.go:141] libmachine: (ha-315064)   <features>
	I0318 20:46:22.444145   21691 main.go:141] libmachine: (ha-315064)     <acpi/>
	I0318 20:46:22.444149   21691 main.go:141] libmachine: (ha-315064)     <apic/>
	I0318 20:46:22.444155   21691 main.go:141] libmachine: (ha-315064)     <pae/>
	I0318 20:46:22.444161   21691 main.go:141] libmachine: (ha-315064)     
	I0318 20:46:22.444168   21691 main.go:141] libmachine: (ha-315064)   </features>
	I0318 20:46:22.444178   21691 main.go:141] libmachine: (ha-315064)   <cpu mode='host-passthrough'>
	I0318 20:46:22.444187   21691 main.go:141] libmachine: (ha-315064)   
	I0318 20:46:22.444193   21691 main.go:141] libmachine: (ha-315064)   </cpu>
	I0318 20:46:22.444216   21691 main.go:141] libmachine: (ha-315064)   <os>
	I0318 20:46:22.444233   21691 main.go:141] libmachine: (ha-315064)     <type>hvm</type>
	I0318 20:46:22.444243   21691 main.go:141] libmachine: (ha-315064)     <boot dev='cdrom'/>
	I0318 20:46:22.444254   21691 main.go:141] libmachine: (ha-315064)     <boot dev='hd'/>
	I0318 20:46:22.444264   21691 main.go:141] libmachine: (ha-315064)     <bootmenu enable='no'/>
	I0318 20:46:22.444273   21691 main.go:141] libmachine: (ha-315064)   </os>
	I0318 20:46:22.444289   21691 main.go:141] libmachine: (ha-315064)   <devices>
	I0318 20:46:22.444306   21691 main.go:141] libmachine: (ha-315064)     <disk type='file' device='cdrom'>
	I0318 20:46:22.444331   21691 main.go:141] libmachine: (ha-315064)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/boot2docker.iso'/>
	I0318 20:46:22.444342   21691 main.go:141] libmachine: (ha-315064)       <target dev='hdc' bus='scsi'/>
	I0318 20:46:22.444357   21691 main.go:141] libmachine: (ha-315064)       <readonly/>
	I0318 20:46:22.444368   21691 main.go:141] libmachine: (ha-315064)     </disk>
	I0318 20:46:22.444386   21691 main.go:141] libmachine: (ha-315064)     <disk type='file' device='disk'>
	I0318 20:46:22.444403   21691 main.go:141] libmachine: (ha-315064)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 20:46:22.444416   21691 main.go:141] libmachine: (ha-315064)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/ha-315064.rawdisk'/>
	I0318 20:46:22.444424   21691 main.go:141] libmachine: (ha-315064)       <target dev='hda' bus='virtio'/>
	I0318 20:46:22.444436   21691 main.go:141] libmachine: (ha-315064)     </disk>
	I0318 20:46:22.444447   21691 main.go:141] libmachine: (ha-315064)     <interface type='network'>
	I0318 20:46:22.444460   21691 main.go:141] libmachine: (ha-315064)       <source network='mk-ha-315064'/>
	I0318 20:46:22.444471   21691 main.go:141] libmachine: (ha-315064)       <model type='virtio'/>
	I0318 20:46:22.444502   21691 main.go:141] libmachine: (ha-315064)     </interface>
	I0318 20:46:22.444523   21691 main.go:141] libmachine: (ha-315064)     <interface type='network'>
	I0318 20:46:22.444535   21691 main.go:141] libmachine: (ha-315064)       <source network='default'/>
	I0318 20:46:22.444546   21691 main.go:141] libmachine: (ha-315064)       <model type='virtio'/>
	I0318 20:46:22.444559   21691 main.go:141] libmachine: (ha-315064)     </interface>
	I0318 20:46:22.444570   21691 main.go:141] libmachine: (ha-315064)     <serial type='pty'>
	I0318 20:46:22.444584   21691 main.go:141] libmachine: (ha-315064)       <target port='0'/>
	I0318 20:46:22.444593   21691 main.go:141] libmachine: (ha-315064)     </serial>
	I0318 20:46:22.444610   21691 main.go:141] libmachine: (ha-315064)     <console type='pty'>
	I0318 20:46:22.444627   21691 main.go:141] libmachine: (ha-315064)       <target type='serial' port='0'/>
	I0318 20:46:22.444642   21691 main.go:141] libmachine: (ha-315064)     </console>
	I0318 20:46:22.444652   21691 main.go:141] libmachine: (ha-315064)     <rng model='virtio'>
	I0318 20:46:22.444662   21691 main.go:141] libmachine: (ha-315064)       <backend model='random'>/dev/random</backend>
	I0318 20:46:22.444671   21691 main.go:141] libmachine: (ha-315064)     </rng>
	I0318 20:46:22.444678   21691 main.go:141] libmachine: (ha-315064)     
	I0318 20:46:22.444691   21691 main.go:141] libmachine: (ha-315064)     
	I0318 20:46:22.444702   21691 main.go:141] libmachine: (ha-315064)   </devices>
	I0318 20:46:22.444711   21691 main.go:141] libmachine: (ha-315064) </domain>
	I0318 20:46:22.444725   21691 main.go:141] libmachine: (ha-315064) 
	I0318 20:46:22.448616   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:4b:27:78 in network default
	I0318 20:46:22.449166   21691 main.go:141] libmachine: (ha-315064) Ensuring networks are active...
	I0318 20:46:22.449188   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:22.449975   21691 main.go:141] libmachine: (ha-315064) Ensuring network default is active
	I0318 20:46:22.450274   21691 main.go:141] libmachine: (ha-315064) Ensuring network mk-ha-315064 is active
	I0318 20:46:22.450831   21691 main.go:141] libmachine: (ha-315064) Getting domain xml...
	I0318 20:46:22.451526   21691 main.go:141] libmachine: (ha-315064) Creating domain...
	I0318 20:46:23.593589   21691 main.go:141] libmachine: (ha-315064) Waiting to get IP...
	I0318 20:46:23.594447   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:23.594836   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:23.594866   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:23.594820   21714 retry.go:31] will retry after 274.347043ms: waiting for machine to come up
	I0318 20:46:23.870347   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:23.870700   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:23.870726   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:23.870671   21714 retry.go:31] will retry after 265.423423ms: waiting for machine to come up
	I0318 20:46:24.137991   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:24.138421   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:24.138448   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:24.138369   21714 retry.go:31] will retry after 324.361893ms: waiting for machine to come up
	I0318 20:46:24.463757   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:24.464171   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:24.464194   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:24.464121   21714 retry.go:31] will retry after 485.166496ms: waiting for machine to come up
	I0318 20:46:24.950536   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:24.950954   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:24.950988   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:24.950924   21714 retry.go:31] will retry after 659.735908ms: waiting for machine to come up
	I0318 20:46:25.612625   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:25.612956   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:25.613002   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:25.612927   21714 retry.go:31] will retry after 577.777037ms: waiting for machine to come up
	I0318 20:46:26.192551   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:26.193016   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:26.193054   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:26.192965   21714 retry.go:31] will retry after 916.92507ms: waiting for machine to come up
	I0318 20:46:27.111346   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:27.111701   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:27.111730   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:27.111650   21714 retry.go:31] will retry after 1.061259623s: waiting for machine to come up
	I0318 20:46:28.174803   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:28.175229   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:28.175252   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:28.175187   21714 retry.go:31] will retry after 1.287700397s: waiting for machine to come up
	I0318 20:46:29.464552   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:29.464939   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:29.464968   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:29.464879   21714 retry.go:31] will retry after 2.206310176s: waiting for machine to come up
	I0318 20:46:31.674070   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:31.674452   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:31.674482   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:31.674405   21714 retry.go:31] will retry after 2.003425876s: waiting for machine to come up
	I0318 20:46:33.678856   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:33.679288   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:33.679316   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:33.679243   21714 retry.go:31] will retry after 3.186798927s: waiting for machine to come up
	I0318 20:46:36.869459   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:36.869755   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:36.869785   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:36.869738   21714 retry.go:31] will retry after 2.922529074s: waiting for machine to come up
	I0318 20:46:39.795981   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:39.796448   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:39.796471   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:39.796409   21714 retry.go:31] will retry after 4.959899587s: waiting for machine to come up
	I0318 20:46:44.759102   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.759533   21691 main.go:141] libmachine: (ha-315064) Found IP for machine: 192.168.39.79
	I0318 20:46:44.759559   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has current primary IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.759569   21691 main.go:141] libmachine: (ha-315064) Reserving static IP address...
	I0318 20:46:44.759888   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find host DHCP lease matching {name: "ha-315064", mac: "52:54:00:3e:a5:8a", ip: "192.168.39.79"} in network mk-ha-315064
	I0318 20:46:44.826952   21691 main.go:141] libmachine: (ha-315064) DBG | Getting to WaitForSSH function...
	I0318 20:46:44.826984   21691 main.go:141] libmachine: (ha-315064) Reserved static IP address: 192.168.39.79
	I0318 20:46:44.826996   21691 main.go:141] libmachine: (ha-315064) Waiting for SSH to be available...
	I0318 20:46:44.829203   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.829555   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:44.829582   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.829715   21691 main.go:141] libmachine: (ha-315064) DBG | Using SSH client type: external
	I0318 20:46:44.829751   21691 main.go:141] libmachine: (ha-315064) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa (-rw-------)
	I0318 20:46:44.829784   21691 main.go:141] libmachine: (ha-315064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 20:46:44.829794   21691 main.go:141] libmachine: (ha-315064) DBG | About to run SSH command:
	I0318 20:46:44.829819   21691 main.go:141] libmachine: (ha-315064) DBG | exit 0
	I0318 20:46:44.952791   21691 main.go:141] libmachine: (ha-315064) DBG | SSH cmd err, output: <nil>: 
	I0318 20:46:44.953132   21691 main.go:141] libmachine: (ha-315064) KVM machine creation complete!
	I0318 20:46:44.953477   21691 main.go:141] libmachine: (ha-315064) Calling .GetConfigRaw
	I0318 20:46:44.953937   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:44.954148   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:44.954308   21691 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 20:46:44.954323   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:46:44.955330   21691 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 20:46:44.955344   21691 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 20:46:44.955352   21691 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 20:46:44.955362   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:44.957299   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.957630   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:44.957659   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.957763   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:44.957960   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:44.958090   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:44.958228   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:44.958377   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:44.958539   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:44.958548   21691 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 20:46:45.060021   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:46:45.060046   21691 main.go:141] libmachine: Detecting the provisioner...
	I0318 20:46:45.060056   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.062452   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.062754   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.062786   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.062908   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.063066   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.063194   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.063374   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.063511   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:45.063700   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:45.063710   21691 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 20:46:45.173941   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 20:46:45.174035   21691 main.go:141] libmachine: found compatible host: buildroot
	I0318 20:46:45.174052   21691 main.go:141] libmachine: Provisioning with buildroot...
	I0318 20:46:45.174064   21691 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:46:45.174329   21691 buildroot.go:166] provisioning hostname "ha-315064"
	I0318 20:46:45.174358   21691 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:46:45.174550   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.176920   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.177246   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.177265   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.177395   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.177559   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.177704   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.177840   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.177970   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:45.178138   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:45.178149   21691 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-315064 && echo "ha-315064" | sudo tee /etc/hostname
	I0318 20:46:45.296459   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064
	
	I0318 20:46:45.296495   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.299139   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.299483   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.299531   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.299693   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.299880   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.300032   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.300156   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.300381   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:45.300532   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:45.300553   21691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-315064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-315064/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-315064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:46:45.417717   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:46:45.417741   21691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:46:45.417776   21691 buildroot.go:174] setting up certificates
	I0318 20:46:45.417788   21691 provision.go:84] configureAuth start
	I0318 20:46:45.417807   21691 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:46:45.418149   21691 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:46:45.420583   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.420893   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.420936   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.421042   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.423034   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.423342   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.423358   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.423476   21691 provision.go:143] copyHostCerts
	I0318 20:46:45.423504   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:46:45.423542   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 20:46:45.423552   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:46:45.423616   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:46:45.423715   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:46:45.423736   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 20:46:45.423743   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:46:45.423768   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:46:45.423821   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:46:45.423836   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 20:46:45.423843   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:46:45.423868   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:46:45.423925   21691 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.ha-315064 san=[127.0.0.1 192.168.39.79 ha-315064 localhost minikube]
	I0318 20:46:45.605107   21691 provision.go:177] copyRemoteCerts
	I0318 20:46:45.605174   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:46:45.605197   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.607728   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.608000   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.608024   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.608171   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.608342   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.608472   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.608605   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:46:45.691465   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 20:46:45.691537   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 20:46:45.718048   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 20:46:45.718104   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:46:45.744716   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 20:46:45.744773   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 20:46:45.771474   21691 provision.go:87] duration metric: took 353.673873ms to configureAuth
	I0318 20:46:45.771509   21691 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:46:45.771731   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:46:45.771821   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.774441   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.774759   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.774786   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.774916   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.775052   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.775211   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.775344   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.775465   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:45.775609   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:45.775624   21691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:46:46.049303   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 20:46:46.049340   21691 main.go:141] libmachine: Checking connection to Docker...
	I0318 20:46:46.049351   21691 main.go:141] libmachine: (ha-315064) Calling .GetURL
	I0318 20:46:46.050640   21691 main.go:141] libmachine: (ha-315064) DBG | Using libvirt version 6000000
	I0318 20:46:46.052726   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.053047   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.053075   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.053197   21691 main.go:141] libmachine: Docker is up and running!
	I0318 20:46:46.053210   21691 main.go:141] libmachine: Reticulating splines...
	I0318 20:46:46.053218   21691 client.go:171] duration metric: took 24.084704977s to LocalClient.Create
	I0318 20:46:46.053242   21691 start.go:167] duration metric: took 24.084766408s to libmachine.API.Create "ha-315064"
	I0318 20:46:46.053254   21691 start.go:293] postStartSetup for "ha-315064" (driver="kvm2")
	I0318 20:46:46.053267   21691 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:46:46.053289   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.053490   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:46:46.053513   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:46.055539   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.055891   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.055917   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.056065   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:46.056248   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.056380   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:46.056534   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:46:46.142462   21691 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:46:46.147241   21691 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:46:46.147262   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:46:46.147313   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:46:46.147388   21691 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 20:46:46.147398   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 20:46:46.147489   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 20:46:46.159746   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:46:46.186951   21691 start.go:296] duration metric: took 133.684985ms for postStartSetup
	I0318 20:46:46.186996   21691 main.go:141] libmachine: (ha-315064) Calling .GetConfigRaw
	I0318 20:46:46.187537   21691 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:46:46.189957   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.190310   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.190339   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.190568   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:46:46.190744   21691 start.go:128] duration metric: took 24.239116546s to createHost
	I0318 20:46:46.190766   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:46.193015   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.193337   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.193361   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.193499   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:46.193701   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.193865   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.193997   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:46.194144   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:46.194299   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:46.194315   21691 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:46:46.297962   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710794806.283404586
	
	I0318 20:46:46.297988   21691 fix.go:216] guest clock: 1710794806.283404586
	I0318 20:46:46.297997   21691 fix.go:229] Guest: 2024-03-18 20:46:46.283404586 +0000 UTC Remote: 2024-03-18 20:46:46.190756996 +0000 UTC m=+24.350032451 (delta=92.64759ms)
	I0318 20:46:46.298014   21691 fix.go:200] guest clock delta is within tolerance: 92.64759ms
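Note: the "date +%!s(MISSING).%!N(MISSING)" string above is a log-formatting artifact; the command actually sent over SSH is date +%s.%N, and fix.go compares the guest's epoch timestamp against the host clock to confirm the skew (92.64759ms here) stays within tolerance. A minimal shell sketch of the same check, assuming the SSH key and user shown earlier in this log and an illustrative 2-second threshold (not minikube's own setting):

    # compare the guest clock (read over SSH) with the local clock
    guest=$(ssh -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa docker@192.168.39.79 'date +%s.%N')
    host=$(date +%s.%N)
    # absolute skew in seconds; the 2s limit is an assumed threshold for illustration
    delta=$(echo "$guest - $host" | bc | tr -d '-')
    awk -v d="$delta" 'BEGIN { exit !(d < 2) }' && echo "clock skew ${delta}s is within tolerance"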
	I0318 20:46:46.298020   21691 start.go:83] releasing machines lock for "ha-315064", held for 24.346466173s
	I0318 20:46:46.298035   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.298246   21691 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:46:46.300628   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.300995   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.301026   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.301199   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.301635   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.301795   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.301893   21691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:46:46.301930   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:46.301996   21691 ssh_runner.go:195] Run: cat /version.json
	I0318 20:46:46.302026   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:46.304410   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.304681   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.304705   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.304727   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.304836   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:46.304994   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.305129   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:46.305166   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.305193   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.305282   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:46:46.305358   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:46.305511   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.305652   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:46.305823   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:46:46.389952   21691 ssh_runner.go:195] Run: systemctl --version
	I0318 20:46:46.412414   21691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:46:46.585196   21691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:46:46.591427   21691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:46:46.591487   21691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:46:46.608427   21691 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 20:46:46.608447   21691 start.go:494] detecting cgroup driver to use...
	I0318 20:46:46.608509   21691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:46:46.626684   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:46:46.641728   21691 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:46:46.641789   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:46:46.657059   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:46:46.671879   21691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:46:46.788408   21691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:46:46.934253   21691 docker.go:233] disabling docker service ...
	I0318 20:46:46.934319   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:46:46.950155   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:46:46.964081   21691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:46:47.099604   21691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:46:47.239156   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:46:47.254226   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:46:47.274183   21691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:46:47.274236   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.286187   21691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:46:47.286230   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.298347   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.310240   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.321968   21691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:46:47.334039   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.345756   21691 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.363876   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
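The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned, the cgroup manager is set to cgroupfs, conmon is moved into the pod cgroup, and unprivileged binds to low ports are allowed. Pieced together from those commands, the relevant drop-in settings would end up roughly as follows (a reconstruction for illustration; the TOML section headers are assumed, not captured from the VM):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]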
	I0318 20:46:47.375577   21691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:46:47.386154   21691 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 20:46:47.386204   21691 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 20:46:47.401415   21691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
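The sysctl probe fails with status 255 because br_netfilter is not loaded yet, so /proc/sys/net/bridge does not exist; the runner then loads the module and enables IPv4 forwarding. The same kernel prerequisites can be verified by hand on the node:

    sudo modprobe br_netfilter
    lsmod | grep br_netfilter                    # module is now loaded
    sysctl net.bridge.bridge-nf-call-iptables    # sysctl tree exists once the module is in
    cat /proc/sys/net/ipv4/ip_forward            # prints 1 after the echo above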
	I0318 20:46:47.412549   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:46:47.551080   21691 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 20:46:47.687299   21691 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:46:47.687377   21691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:46:47.693010   21691 start.go:562] Will wait 60s for crictl version
	I0318 20:46:47.693057   21691 ssh_runner.go:195] Run: which crictl
	I0318 20:46:47.697174   21691 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:46:47.737039   21691 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 20:46:47.737127   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:46:47.766077   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:46:47.796915   21691 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:46:47.798150   21691 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:46:47.800442   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:47.800745   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:47.800774   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:47.800978   21691 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:46:47.805458   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:46:47.820533   21691 kubeadm.go:877] updating cluster {Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 20:46:47.820624   21691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:46:47.820658   21691 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:46:47.854790   21691 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 20:46:47.854855   21691 ssh_runner.go:195] Run: which lz4
	I0318 20:46:47.859097   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 20:46:47.859192   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 20:46:47.863620   21691 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 20:46:47.863640   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 20:46:49.661674   21691 crio.go:462] duration metric: took 1.802494227s to copy over tarball
	I0318 20:46:49.661746   21691 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 20:46:52.299139   21691 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.637357249s)
	I0318 20:46:52.299169   21691 crio.go:469] duration metric: took 2.637464587s to extract the tarball
	I0318 20:46:52.299177   21691 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 20:46:52.342658   21691 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:46:52.390732   21691 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 20:46:52.390750   21691 cache_images.go:84] Images are preloaded, skipping loading
	I0318 20:46:52.390757   21691 kubeadm.go:928] updating node { 192.168.39.79 8443 v1.28.4 crio true true} ...
	I0318 20:46:52.390891   21691 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-315064 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:46:52.390975   21691 ssh_runner.go:195] Run: crio config
	I0318 20:46:52.438327   21691 cni.go:84] Creating CNI manager for ""
	I0318 20:46:52.438349   21691 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 20:46:52.438365   21691 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 20:46:52.438389   21691 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.79 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-315064 NodeName:ha-315064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 20:46:52.438523   21691 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-315064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 20:46:52.438549   21691 kube-vip.go:111] generating kube-vip config ...
	I0318 20:46:52.438584   21691 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 20:46:52.458947   21691 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 20:46:52.459061   21691 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
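kube-vip.go only auto-enables control-plane load-balancing (the lb_enable/lb_port entries above) after the modprobe of ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack succeeds, so the VIP 192.168.39.254 is balanced over IPVS on port 8443. The same module check can be repeated by hand on the node:

    sudo modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
    lsmod | grep -E 'ip_vs|nf_conntrack'    # modules kube-vip needs for IPVS load balancing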
	I0318 20:46:52.459126   21691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:46:52.471105   21691 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 20:46:52.471162   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 20:46:52.482670   21691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0318 20:46:52.501751   21691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:46:52.520325   21691 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0318 20:46:52.539485   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 20:46:52.558348   21691 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 20:46:52.563071   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:46:52.577845   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:46:52.702982   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:46:52.720532   21691 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064 for IP: 192.168.39.79
	I0318 20:46:52.720549   21691 certs.go:194] generating shared ca certs ...
	I0318 20:46:52.720566   21691 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.720705   21691 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:46:52.720755   21691 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:46:52.720768   21691 certs.go:256] generating profile certs ...
	I0318 20:46:52.720833   21691 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key
	I0318 20:46:52.720853   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt with IP's: []
	I0318 20:46:52.846655   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt ...
	I0318 20:46:52.846706   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt: {Name:mkfbf0e8628dd07990bd6fe2635e15f4b1d135fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.847077   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key ...
	I0318 20:46:52.847109   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key: {Name:mk029b3c519fd721ceecf06ae82b3034b3d72595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.847294   21691 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.f72f8f85
	I0318 20:46:52.847316   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.f72f8f85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.254]
	I0318 20:46:52.972176   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.f72f8f85 ...
	I0318 20:46:52.972206   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.f72f8f85: {Name:mk809a3d998afad1344d1912954543bd78b5687c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.972348   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.f72f8f85 ...
	I0318 20:46:52.972367   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.f72f8f85: {Name:mk2b16960466efe924cbf02c221964fe69ab0498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.972436   21691 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.f72f8f85 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt
	I0318 20:46:52.972520   21691 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.f72f8f85 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key
	I0318 20:46:52.972572   21691 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key
	I0318 20:46:52.972586   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt with IP's: []
	I0318 20:46:53.030704   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt ...
	I0318 20:46:53.030728   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt: {Name:mkb1c6c4fc166282744b97f277714d12fbf364d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:53.030867   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key ...
	I0318 20:46:53.030877   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key: {Name:mka950c2802fbd336e6077e24c694131bb322466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:53.030942   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 20:46:53.030957   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 20:46:53.030967   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 20:46:53.030978   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 20:46:53.030990   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 20:46:53.031000   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 20:46:53.031013   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 20:46:53.031022   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 20:46:53.031064   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 20:46:53.031096   21691 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 20:46:53.031105   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:46:53.031128   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:46:53.031151   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:46:53.031171   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:46:53.031205   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:46:53.031229   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 20:46:53.031246   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 20:46:53.031263   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:46:53.031808   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:46:53.060127   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:46:53.085858   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:46:53.111659   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:46:53.137876   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 20:46:53.163264   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 20:46:53.190323   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:46:53.216299   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 20:46:53.242055   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 20:46:53.267645   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 20:46:53.293200   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:46:53.319883   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 20:46:53.338442   21691 ssh_runner.go:195] Run: openssl version
	I0318 20:46:53.344723   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:46:53.357314   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:46:53.362409   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:46:53.362470   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:46:53.368960   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 20:46:53.381477   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 20:46:53.394084   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 20:46:53.399242   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 20:46:53.399296   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 20:46:53.405646   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 20:46:53.418411   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 20:46:53.431704   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 20:46:53.436893   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 20:46:53.436942   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 20:46:53.443434   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
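The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: each CA copied into /usr/share/ca-certificates is hashed with openssl x509 -hash and linked as <hash>.0 under /etc/ssl/certs so the system trust lookup can find it. Reproducing one step by hand (the hash value is the one already shown in this log, not recomputed here):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0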
	I0318 20:46:53.456139   21691 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:46:53.460939   21691 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 20:46:53.460997   21691 kubeadm.go:391] StartCluster: {Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:46:53.461098   21691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 20:46:53.461158   21691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 20:46:53.506898   21691 cri.go:89] found id: ""
	I0318 20:46:53.506965   21691 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 20:46:53.518555   21691 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 20:46:53.532584   21691 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 20:46:53.556262   21691 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 20:46:53.556278   21691 kubeadm.go:156] found existing configuration files:
	
	I0318 20:46:53.556318   21691 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 20:46:53.571812   21691 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 20:46:53.571852   21691 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 20:46:53.589112   21691 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 20:46:53.599484   21691 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 20:46:53.599540   21691 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 20:46:53.617233   21691 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 20:46:53.628061   21691 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 20:46:53.628107   21691 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 20:46:53.639263   21691 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 20:46:53.650052   21691 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 20:46:53.650129   21691 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
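The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed before kubeadm init runs. Condensed into an equivalent shell sketch:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # drop configs that are missing or point elsewhere
    done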
	I0318 20:46:53.661021   21691 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 20:46:53.764528   21691 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 20:46:53.764639   21691 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 20:46:53.910044   21691 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 20:46:53.910185   21691 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 20:46:53.910334   21691 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 20:46:54.139369   21691 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 20:46:54.356012   21691 out.go:204]   - Generating certificates and keys ...
	I0318 20:46:54.356140   21691 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 20:46:54.356261   21691 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 20:46:54.369747   21691 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 20:46:54.746613   21691 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 20:46:54.864651   21691 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 20:46:55.040387   21691 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 20:46:55.150803   21691 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 20:46:55.150976   21691 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-315064 localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
	I0318 20:46:55.258885   21691 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 20:46:55.259021   21691 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-315064 localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
	I0318 20:46:55.331196   21691 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 20:46:55.461613   21691 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 20:46:55.555510   21691 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 20:46:55.555822   21691 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 20:46:55.869968   21691 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 20:46:56.095783   21691 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 20:46:56.334938   21691 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 20:46:56.413457   21691 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 20:46:56.414146   21691 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 20:46:56.417095   21691 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 20:46:56.419100   21691 out.go:204]   - Booting up control plane ...
	I0318 20:46:56.419226   21691 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 20:46:56.419319   21691 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 20:46:56.419390   21691 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 20:46:56.437328   21691 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 20:46:56.438508   21691 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 20:46:56.439122   21691 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 20:46:56.574190   21691 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 20:47:06.192480   21691 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.619608 seconds
	I0318 20:47:06.192620   21691 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 20:47:06.206609   21691 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 20:47:06.741282   21691 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 20:47:06.741487   21691 kubeadm.go:309] [mark-control-plane] Marking the node ha-315064 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 20:47:07.261400   21691 kubeadm.go:309] [bootstrap-token] Using token: 1buc55.ep1i46vz8cpac7up
	I0318 20:47:07.263020   21691 out.go:204]   - Configuring RBAC rules ...
	I0318 20:47:07.263160   21691 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 20:47:07.270124   21691 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 20:47:07.279342   21691 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 20:47:07.283192   21691 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 20:47:07.288935   21691 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 20:47:07.298117   21691 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 20:47:07.311963   21691 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 20:47:07.584725   21691 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 20:47:07.677680   21691 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 20:47:07.678413   21691 kubeadm.go:309] 
	I0318 20:47:07.678517   21691 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 20:47:07.678560   21691 kubeadm.go:309] 
	I0318 20:47:07.678648   21691 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 20:47:07.678660   21691 kubeadm.go:309] 
	I0318 20:47:07.678694   21691 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 20:47:07.678751   21691 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 20:47:07.678818   21691 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 20:47:07.678827   21691 kubeadm.go:309] 
	I0318 20:47:07.678914   21691 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 20:47:07.678937   21691 kubeadm.go:309] 
	I0318 20:47:07.679017   21691 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 20:47:07.679030   21691 kubeadm.go:309] 
	I0318 20:47:07.679108   21691 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 20:47:07.679206   21691 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 20:47:07.679298   21691 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 20:47:07.679324   21691 kubeadm.go:309] 
	I0318 20:47:07.679444   21691 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 20:47:07.679545   21691 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 20:47:07.679561   21691 kubeadm.go:309] 
	I0318 20:47:07.679672   21691 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1buc55.ep1i46vz8cpac7up \
	I0318 20:47:07.679819   21691 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 20:47:07.679868   21691 kubeadm.go:309] 	--control-plane 
	I0318 20:47:07.679881   21691 kubeadm.go:309] 
	I0318 20:47:07.680000   21691 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 20:47:07.680012   21691 kubeadm.go:309] 
	I0318 20:47:07.680110   21691 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1buc55.ep1i46vz8cpac7up \
	I0318 20:47:07.680257   21691 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 20:47:07.680922   21691 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
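kubeadm init completed with a single warning about the kubelet unit not being enabled. On a hand-managed host the follow-up would look like the sketch below (illustrative only; minikube manages the service itself, and the in-VM kubectl lives under /var/lib/minikube/binaries):

    sudo systemctl enable kubelet.service
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes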
	I0318 20:47:07.680953   21691 cni.go:84] Creating CNI manager for ""
	I0318 20:47:07.680965   21691 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 20:47:07.682696   21691 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 20:47:07.684140   21691 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 20:47:07.693797   21691 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 20:47:07.693816   21691 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 20:47:07.734800   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 20:47:08.789983   21691 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.055150964s)
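The applied cni.yaml is the kindnet manifest minikube falls back to for multi-node clusters (see the "multinode detected ... recommending kindnet" line above). A quick rollout check, assuming the DaemonSet keeps its default name kindnet:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet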
	I0318 20:47:08.790026   21691 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 20:47:08.790117   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:08.790165   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-315064 minikube.k8s.io/updated_at=2024_03_18T20_47_08_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=ha-315064 minikube.k8s.io/primary=true
	I0318 20:47:08.807213   21691 ops.go:34] apiserver oom_adj: -16
	I0318 20:47:09.014164   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:09.514197   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:10.014959   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:10.514611   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:11.015108   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:11.515168   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:12.015241   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:12.514751   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:13.014995   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:13.515000   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:14.014990   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:14.514197   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:15.014307   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:15.514814   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:16.014503   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:16.514710   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:17.014637   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:17.514453   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:17.733344   21691 kubeadm.go:1107] duration metric: took 8.943275498s to wait for elevateKubeSystemPrivileges
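The burst of "kubectl get sa default" calls above is minikube polling at roughly 500ms intervals until the default ServiceAccount exists, the point at which the minikube-rbac ClusterRoleBinding created earlier can actually take effect. The same wait expressed as a one-liner:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
          get sa default >/dev/null 2>&1; do sleep 0.5; done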
	W0318 20:47:17.733392   21691 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 20:47:17.733401   21691 kubeadm.go:393] duration metric: took 24.272407947s to StartCluster
	I0318 20:47:17.733421   21691 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:17.733507   21691 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:47:17.734420   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:17.734673   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 20:47:17.734693   21691 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:47:17.734725   21691 start.go:240] waiting for startup goroutines ...
	I0318 20:47:17.734737   21691 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 20:47:17.734808   21691 addons.go:69] Setting storage-provisioner=true in profile "ha-315064"
	I0318 20:47:17.734828   21691 addons.go:69] Setting default-storageclass=true in profile "ha-315064"
	I0318 20:47:17.734838   21691 addons.go:234] Setting addon storage-provisioner=true in "ha-315064"
	I0318 20:47:17.734859   21691 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-315064"
	I0318 20:47:17.734868   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:47:17.734897   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:47:17.735212   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.735255   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.735285   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.735314   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.750053   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0318 20:47:17.750332   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0318 20:47:17.750493   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.750741   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.750946   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.750969   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.751197   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.751220   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.751283   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.751470   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:47:17.751527   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.752074   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.752108   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.753544   21691 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:47:17.753766   21691 kapi.go:59] client config for ha-315064: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt", KeyFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key", CAFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 20:47:17.754560   21691 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 20:47:17.754685   21691 addons.go:234] Setting addon default-storageclass=true in "ha-315064"
	I0318 20:47:17.754725   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:47:17.755080   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.755108   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.766882   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0318 20:47:17.767291   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.767788   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.767816   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.768103   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.768307   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:47:17.768629   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0318 20:47:17.769078   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.769721   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.769742   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.770062   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:47:17.770234   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.771912   21691 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 20:47:17.770730   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.773321   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.773434   21691 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 20:47:17.773455   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 20:47:17.773475   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:47:17.776104   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:17.776481   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:47:17.776500   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:17.776723   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:47:17.776886   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:47:17.777054   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:47:17.777214   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:47:17.787628   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0318 20:47:17.788005   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.788403   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.788427   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.788766   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.788949   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:47:17.790234   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:47:17.790475   21691 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 20:47:17.790493   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 20:47:17.790510   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:47:17.792951   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:17.793336   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:47:17.793362   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:17.793470   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:47:17.793629   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:47:17.793778   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:47:17.793916   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:47:17.999436   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 20:47:18.009444   21691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 20:47:18.023029   21691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 20:47:18.901432   21691 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
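The sed pipeline at 20:47:17.999 edits the CoreDNS Corefile in place: it inserts a hosts block ahead of the existing "forward . /etc/resolv.conf" directive (and a "log" directive after "errors"), then replaces the ConfigMap, which is what the "host record injected" line above confirms. The injected fragment resolves host.minikube.internal to the host-side gateway:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }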
	I0318 20:47:19.066592   21691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.057118931s)
	I0318 20:47:19.066643   21691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043585863s)
	I0318 20:47:19.066680   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.066697   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.066649   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.066738   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.066989   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.067003   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.067011   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.067018   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.067076   21691 main.go:141] libmachine: (ha-315064) DBG | Closing plugin on server side
	I0318 20:47:19.067164   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.067186   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.067203   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.067214   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.067222   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.067236   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.067352   21691 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 20:47:19.067366   21691 round_trippers.go:469] Request Headers:
	I0318 20:47:19.067386   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:47:19.067398   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:47:19.067459   21691 main.go:141] libmachine: (ha-315064) DBG | Closing plugin on server side
	I0318 20:47:19.067459   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.067494   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.078473   21691 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 20:47:19.078970   21691 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 20:47:19.078982   21691 round_trippers.go:469] Request Headers:
	I0318 20:47:19.078989   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:47:19.078995   21691 round_trippers.go:473]     Content-Type: application/json
	I0318 20:47:19.078999   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:47:19.081842   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:47:19.082129   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.082141   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.082374   21691 main.go:141] libmachine: (ha-315064) DBG | Closing plugin on server side
	I0318 20:47:19.082390   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.082400   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.084895   21691 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 20:47:19.086252   21691 addons.go:505] duration metric: took 1.351513507s for enable addons: enabled=[storage-provisioner default-storageclass]
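Both addons were applied through the in-VM kubectl, and the PUT to .../storageclasses/standard above is minikube marking "standard" as the default StorageClass. A spot check from the host could look like this (the kubectl context matches the profile name; the pod name is assumed from the storage-provisioner addon manifest):

    kubectl --context ha-315064 get storageclass standard
    kubectl --context ha-315064 -n kube-system get pod storage-provisioner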
	I0318 20:47:19.086285   21691 start.go:245] waiting for cluster config update ...
	I0318 20:47:19.086300   21691 start.go:254] writing updated cluster config ...
	I0318 20:47:19.087978   21691 out.go:177] 
	I0318 20:47:19.089526   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:47:19.089591   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:47:19.091233   21691 out.go:177] * Starting "ha-315064-m02" control-plane node in "ha-315064" cluster
	I0318 20:47:19.092566   21691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:47:19.092584   21691 cache.go:56] Caching tarball of preloaded images
	I0318 20:47:19.092665   21691 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:47:19.092677   21691 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:47:19.092744   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:47:19.092888   21691 start.go:360] acquireMachinesLock for ha-315064-m02: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:47:19.092955   21691 start.go:364] duration metric: took 32.077µs to acquireMachinesLock for "ha-315064-m02"
	I0318 20:47:19.092976   21691 start.go:93] Provisioning new machine with config: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:47:19.093037   21691 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0318 20:47:19.095468   21691 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 20:47:19.095533   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:19.095556   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:19.110060   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0318 20:47:19.110524   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:19.110960   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:19.110981   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:19.111319   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:19.111532   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetMachineName
	I0318 20:47:19.111704   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:19.111876   21691 start.go:159] libmachine.API.Create for "ha-315064" (driver="kvm2")
	I0318 20:47:19.111902   21691 client.go:168] LocalClient.Create starting
	I0318 20:47:19.111936   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 20:47:19.111970   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:47:19.111992   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:47:19.112065   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 20:47:19.112096   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:47:19.112119   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:47:19.112147   21691 main.go:141] libmachine: Running pre-create checks...
	I0318 20:47:19.112158   21691 main.go:141] libmachine: (ha-315064-m02) Calling .PreCreateCheck
	I0318 20:47:19.112327   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetConfigRaw
	I0318 20:47:19.112766   21691 main.go:141] libmachine: Creating machine...
	I0318 20:47:19.112785   21691 main.go:141] libmachine: (ha-315064-m02) Calling .Create
	I0318 20:47:19.112939   21691 main.go:141] libmachine: (ha-315064-m02) Creating KVM machine...
	I0318 20:47:19.114071   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found existing default KVM network
	I0318 20:47:19.114268   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found existing private KVM network mk-ha-315064
	I0318 20:47:19.114400   21691 main.go:141] libmachine: (ha-315064-m02) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02 ...
	I0318 20:47:19.114424   21691 main.go:141] libmachine: (ha-315064-m02) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 20:47:19.114494   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:19.114393   22037 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:47:19.114571   21691 main.go:141] libmachine: (ha-315064-m02) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 20:47:19.336459   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:19.336289   22037 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa...
	I0318 20:47:19.653315   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:19.653206   22037 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/ha-315064-m02.rawdisk...
	I0318 20:47:19.653352   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Writing magic tar header
	I0318 20:47:19.653366   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Writing SSH key tar header
	I0318 20:47:19.653382   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:19.653307   22037 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02 ...
	I0318 20:47:19.653400   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02
	I0318 20:47:19.653420   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 20:47:19.653435   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:47:19.653451   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02 (perms=drwx------)
	I0318 20:47:19.653470   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 20:47:19.653487   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 20:47:19.653496   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 20:47:19.653504   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 20:47:19.653511   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins
	I0318 20:47:19.653518   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home
	I0318 20:47:19.653528   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Skipping /home - not owner
	I0318 20:47:19.653543   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 20:47:19.653575   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 20:47:19.653601   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 20:47:19.653615   21691 main.go:141] libmachine: (ha-315064-m02) Creating domain...
	I0318 20:47:19.654390   21691 main.go:141] libmachine: (ha-315064-m02) define libvirt domain using xml: 
	I0318 20:47:19.654412   21691 main.go:141] libmachine: (ha-315064-m02) <domain type='kvm'>
	I0318 20:47:19.654422   21691 main.go:141] libmachine: (ha-315064-m02)   <name>ha-315064-m02</name>
	I0318 20:47:19.654437   21691 main.go:141] libmachine: (ha-315064-m02)   <memory unit='MiB'>2200</memory>
	I0318 20:47:19.654451   21691 main.go:141] libmachine: (ha-315064-m02)   <vcpu>2</vcpu>
	I0318 20:47:19.654462   21691 main.go:141] libmachine: (ha-315064-m02)   <features>
	I0318 20:47:19.654473   21691 main.go:141] libmachine: (ha-315064-m02)     <acpi/>
	I0318 20:47:19.654484   21691 main.go:141] libmachine: (ha-315064-m02)     <apic/>
	I0318 20:47:19.654494   21691 main.go:141] libmachine: (ha-315064-m02)     <pae/>
	I0318 20:47:19.654505   21691 main.go:141] libmachine: (ha-315064-m02)     
	I0318 20:47:19.654534   21691 main.go:141] libmachine: (ha-315064-m02)   </features>
	I0318 20:47:19.654557   21691 main.go:141] libmachine: (ha-315064-m02)   <cpu mode='host-passthrough'>
	I0318 20:47:19.654569   21691 main.go:141] libmachine: (ha-315064-m02)   
	I0318 20:47:19.654580   21691 main.go:141] libmachine: (ha-315064-m02)   </cpu>
	I0318 20:47:19.654593   21691 main.go:141] libmachine: (ha-315064-m02)   <os>
	I0318 20:47:19.654604   21691 main.go:141] libmachine: (ha-315064-m02)     <type>hvm</type>
	I0318 20:47:19.654616   21691 main.go:141] libmachine: (ha-315064-m02)     <boot dev='cdrom'/>
	I0318 20:47:19.654627   21691 main.go:141] libmachine: (ha-315064-m02)     <boot dev='hd'/>
	I0318 20:47:19.654638   21691 main.go:141] libmachine: (ha-315064-m02)     <bootmenu enable='no'/>
	I0318 20:47:19.654651   21691 main.go:141] libmachine: (ha-315064-m02)   </os>
	I0318 20:47:19.654659   21691 main.go:141] libmachine: (ha-315064-m02)   <devices>
	I0318 20:47:19.654675   21691 main.go:141] libmachine: (ha-315064-m02)     <disk type='file' device='cdrom'>
	I0318 20:47:19.654692   21691 main.go:141] libmachine: (ha-315064-m02)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/boot2docker.iso'/>
	I0318 20:47:19.654705   21691 main.go:141] libmachine: (ha-315064-m02)       <target dev='hdc' bus='scsi'/>
	I0318 20:47:19.654724   21691 main.go:141] libmachine: (ha-315064-m02)       <readonly/>
	I0318 20:47:19.654745   21691 main.go:141] libmachine: (ha-315064-m02)     </disk>
	I0318 20:47:19.654763   21691 main.go:141] libmachine: (ha-315064-m02)     <disk type='file' device='disk'>
	I0318 20:47:19.654790   21691 main.go:141] libmachine: (ha-315064-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 20:47:19.654815   21691 main.go:141] libmachine: (ha-315064-m02)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/ha-315064-m02.rawdisk'/>
	I0318 20:47:19.654830   21691 main.go:141] libmachine: (ha-315064-m02)       <target dev='hda' bus='virtio'/>
	I0318 20:47:19.654838   21691 main.go:141] libmachine: (ha-315064-m02)     </disk>
	I0318 20:47:19.654851   21691 main.go:141] libmachine: (ha-315064-m02)     <interface type='network'>
	I0318 20:47:19.654859   21691 main.go:141] libmachine: (ha-315064-m02)       <source network='mk-ha-315064'/>
	I0318 20:47:19.654870   21691 main.go:141] libmachine: (ha-315064-m02)       <model type='virtio'/>
	I0318 20:47:19.654880   21691 main.go:141] libmachine: (ha-315064-m02)     </interface>
	I0318 20:47:19.654901   21691 main.go:141] libmachine: (ha-315064-m02)     <interface type='network'>
	I0318 20:47:19.654916   21691 main.go:141] libmachine: (ha-315064-m02)       <source network='default'/>
	I0318 20:47:19.654926   21691 main.go:141] libmachine: (ha-315064-m02)       <model type='virtio'/>
	I0318 20:47:19.654933   21691 main.go:141] libmachine: (ha-315064-m02)     </interface>
	I0318 20:47:19.654945   21691 main.go:141] libmachine: (ha-315064-m02)     <serial type='pty'>
	I0318 20:47:19.654955   21691 main.go:141] libmachine: (ha-315064-m02)       <target port='0'/>
	I0318 20:47:19.654964   21691 main.go:141] libmachine: (ha-315064-m02)     </serial>
	I0318 20:47:19.654975   21691 main.go:141] libmachine: (ha-315064-m02)     <console type='pty'>
	I0318 20:47:19.655000   21691 main.go:141] libmachine: (ha-315064-m02)       <target type='serial' port='0'/>
	I0318 20:47:19.655017   21691 main.go:141] libmachine: (ha-315064-m02)     </console>
	I0318 20:47:19.655032   21691 main.go:141] libmachine: (ha-315064-m02)     <rng model='virtio'>
	I0318 20:47:19.655049   21691 main.go:141] libmachine: (ha-315064-m02)       <backend model='random'>/dev/random</backend>
	I0318 20:47:19.655058   21691 main.go:141] libmachine: (ha-315064-m02)     </rng>
	I0318 20:47:19.655065   21691 main.go:141] libmachine: (ha-315064-m02)     
	I0318 20:47:19.655077   21691 main.go:141] libmachine: (ha-315064-m02)     
	I0318 20:47:19.655088   21691 main.go:141] libmachine: (ha-315064-m02)   </devices>
	I0318 20:47:19.655099   21691 main.go:141] libmachine: (ha-315064-m02) </domain>
	I0318 20:47:19.655110   21691 main.go:141] libmachine: (ha-315064-m02) 
	I0318 20:47:19.661541   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:cd:c3:4e in network default
	I0318 20:47:19.662052   21691 main.go:141] libmachine: (ha-315064-m02) Ensuring networks are active...
	I0318 20:47:19.662074   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:19.662747   21691 main.go:141] libmachine: (ha-315064-m02) Ensuring network default is active
	I0318 20:47:19.663055   21691 main.go:141] libmachine: (ha-315064-m02) Ensuring network mk-ha-315064 is active
	I0318 20:47:19.663352   21691 main.go:141] libmachine: (ha-315064-m02) Getting domain xml...
	I0318 20:47:19.664011   21691 main.go:141] libmachine: (ha-315064-m02) Creating domain...
	I0318 20:47:20.891077   21691 main.go:141] libmachine: (ha-315064-m02) Waiting to get IP...
	I0318 20:47:20.892035   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:20.892420   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:20.892446   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:20.892401   22037 retry.go:31] will retry after 307.508626ms: waiting for machine to come up
	I0318 20:47:21.202013   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:21.202516   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:21.202546   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:21.202476   22037 retry.go:31] will retry after 367.474223ms: waiting for machine to come up
	I0318 20:47:21.571970   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:21.572427   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:21.572455   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:21.572380   22037 retry.go:31] will retry after 408.132027ms: waiting for machine to come up
	I0318 20:47:21.982468   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:21.983022   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:21.983053   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:21.982974   22037 retry.go:31] will retry after 501.335195ms: waiting for machine to come up
	I0318 20:47:22.485585   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:22.486050   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:22.486094   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:22.486020   22037 retry.go:31] will retry after 734.489713ms: waiting for machine to come up
	I0318 20:47:23.221785   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:23.222239   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:23.222266   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:23.222205   22037 retry.go:31] will retry after 853.9073ms: waiting for machine to come up
	I0318 20:47:24.077586   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:24.078058   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:24.078091   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:24.078010   22037 retry.go:31] will retry after 1.158273772s: waiting for machine to come up
	I0318 20:47:25.237375   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:25.237816   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:25.237840   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:25.237789   22037 retry.go:31] will retry after 1.20695979s: waiting for machine to come up
	I0318 20:47:26.446084   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:26.446524   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:26.446552   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:26.446488   22037 retry.go:31] will retry after 1.582418917s: waiting for machine to come up
	I0318 20:47:28.029813   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:28.030202   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:28.030232   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:28.030156   22037 retry.go:31] will retry after 1.8376141s: waiting for machine to come up
	I0318 20:47:29.869029   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:29.869479   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:29.869502   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:29.869440   22037 retry.go:31] will retry after 2.868778614s: waiting for machine to come up
	I0318 20:47:32.739287   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:32.739682   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:32.739703   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:32.739652   22037 retry.go:31] will retry after 2.654134326s: waiting for machine to come up
	I0318 20:47:35.395406   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:35.395790   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:35.395811   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:35.395760   22037 retry.go:31] will retry after 3.820856712s: waiting for machine to come up
	I0318 20:47:39.217916   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:39.218310   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:39.218347   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:39.218279   22037 retry.go:31] will retry after 5.323823478s: waiting for machine to come up
	I0318 20:47:44.543655   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.544031   21691 main.go:141] libmachine: (ha-315064-m02) Found IP for machine: 192.168.39.231
	I0318 20:47:44.544050   21691 main.go:141] libmachine: (ha-315064-m02) Reserving static IP address...
	I0318 20:47:44.544063   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has current primary IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.544406   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find host DHCP lease matching {name: "ha-315064-m02", mac: "52:54:00:83:47:db", ip: "192.168.39.231"} in network mk-ha-315064
	I0318 20:47:44.613338   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Getting to WaitForSSH function...
	I0318 20:47:44.613373   21691 main.go:141] libmachine: (ha-315064-m02) Reserved static IP address: 192.168.39.231
	I0318 20:47:44.613385   21691 main.go:141] libmachine: (ha-315064-m02) Waiting for SSH to be available...
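
The back-off loop above is libmachine polling libvirt until the guest's MAC picks up a DHCP lease. The same check can be run by hand against the host's libvirt daemon (a minimal sketch; the network name and MAC are taken from the log, the poll interval is an arbitrary choice):

    # Poll the mk-ha-315064 network until a lease for the guest's MAC appears.
    MAC=52:54:00:83:47:db
    NET=mk-ha-315064
    until virsh net-dhcp-leases "$NET" | grep -qi "$MAC"; do
        sleep 2    # fixed interval here; libmachine uses an increasing back-off
    done
    virsh net-dhcp-leases "$NET" | grep -i "$MAC"    # shows the assigned IP (192.168.39.231 here)
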
	I0318 20:47:44.615919   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.616386   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:47:db}
	I0318 20:47:44.616415   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.616527   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Using SSH client type: external
	I0318 20:47:44.616554   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa (-rw-------)
	I0318 20:47:44.616596   21691 main.go:141] libmachine: (ha-315064-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 20:47:44.616614   21691 main.go:141] libmachine: (ha-315064-m02) DBG | About to run SSH command:
	I0318 20:47:44.616628   21691 main.go:141] libmachine: (ha-315064-m02) DBG | exit 0
	I0318 20:47:44.745056   21691 main.go:141] libmachine: (ha-315064-m02) DBG | SSH cmd err, output: <nil>: 
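
The `exit 0` probe above can be reproduced with a plain ssh invocation using the same non-interactive options libmachine passed (sketch; key path, user and IP are the ones logged for this machine):

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o PasswordAuthentication=no -o IdentitiesOnly=yes \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa \
        -p 22 docker@192.168.39.231 'exit 0' \
      && echo "SSH reachable"
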
	I0318 20:47:44.745296   21691 main.go:141] libmachine: (ha-315064-m02) KVM machine creation complete!
	I0318 20:47:44.745623   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetConfigRaw
	I0318 20:47:44.746158   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:44.746333   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:44.746518   21691 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 20:47:44.746533   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:47:44.747653   21691 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 20:47:44.747665   21691 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 20:47:44.747671   21691 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 20:47:44.747679   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:44.749757   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.750127   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:44.750159   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.750256   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:44.750423   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.750581   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.750739   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:44.750903   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:44.751102   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:44.751116   21691 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 20:47:44.860877   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:47:44.860926   21691 main.go:141] libmachine: Detecting the provisioner...
	I0318 20:47:44.860936   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:44.863953   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.864310   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:44.864335   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.864523   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:44.864723   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.864866   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.865002   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:44.865150   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:44.865318   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:44.865331   21691 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 20:47:44.977946   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 20:47:44.978024   21691 main.go:141] libmachine: found compatible host: buildroot
	I0318 20:47:44.978031   21691 main.go:141] libmachine: Provisioning with buildroot...
	I0318 20:47:44.978040   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetMachineName
	I0318 20:47:44.978291   21691 buildroot.go:166] provisioning hostname "ha-315064-m02"
	I0318 20:47:44.978319   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetMachineName
	I0318 20:47:44.978522   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:44.981043   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.981416   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:44.981444   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.981595   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:44.981743   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.981913   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.982030   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:44.982172   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:44.982319   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:44.982331   21691 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-315064-m02 && echo "ha-315064-m02" | sudo tee /etc/hostname
	I0318 20:47:45.108741   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064-m02
	
	I0318 20:47:45.108763   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:45.111288   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.111666   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.111697   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.111855   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:45.112063   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.112249   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.112398   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:45.112558   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:45.112720   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:45.112737   21691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-315064-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-315064-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-315064-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:47:45.230251   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
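
A quick way to confirm that the hostname provisioning above took effect on the guest (sketch; run over the same SSH session):

    hostname                                   # expected: ha-315064-m02
    cat /etc/hostname                          # expected: ha-315064-m02
    grep -n 'ha-315064-m02' /etc/hosts         # expected: a 127.0.1.1 entry
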
	I0318 20:47:45.230275   21691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:47:45.230289   21691 buildroot.go:174] setting up certificates
	I0318 20:47:45.230299   21691 provision.go:84] configureAuth start
	I0318 20:47:45.230309   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetMachineName
	I0318 20:47:45.230547   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:47:45.233273   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.233648   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.233683   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.233859   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:45.235996   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.236306   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.236329   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.236459   21691 provision.go:143] copyHostCerts
	I0318 20:47:45.236484   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:47:45.236510   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 20:47:45.236518   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:47:45.236583   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:47:45.236677   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:47:45.236702   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 20:47:45.236711   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:47:45.236738   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:47:45.236801   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:47:45.236817   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 20:47:45.236823   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:47:45.236847   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:47:45.236918   21691 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.ha-315064-m02 san=[127.0.0.1 192.168.39.231 ha-315064-m02 localhost minikube]
	I0318 20:47:45.546247   21691 provision.go:177] copyRemoteCerts
	I0318 20:47:45.546410   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:47:45.546470   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:45.549477   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.549818   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.549849   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.550188   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:45.550376   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.550562   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:45.550718   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:47:45.638487   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 20:47:45.638568   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:47:45.666316   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 20:47:45.666385   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 20:47:45.692354   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 20:47:45.692430   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 20:47:45.717316   21691 provision.go:87] duration metric: took 487.007623ms to configureAuth
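
configureAuth generates the server certificate in Go; a hand-rolled equivalent of the same certificate (organization and SAN list as logged above, file names matching the CaCertPath/ServerCertPath values) would look roughly like this with openssl. Illustrative only, not how minikube actually produces it:

    # Issue a server cert signed by the minikube CA with the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -subj "/O=jenkins.ha-315064-m02" -out server.csr
    openssl x509 -req -in server.csr \
        -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.231,DNS:ha-315064-m02,DNS:localhost,DNS:minikube') \
        -out server.pem
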
	I0318 20:47:45.717336   21691 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:47:45.717496   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:47:45.717563   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:45.720132   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.720503   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.720533   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.720732   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:45.720947   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.721128   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.721279   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:45.721420   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:45.721617   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:45.721632   21691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:47:46.004191   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 20:47:46.004231   21691 main.go:141] libmachine: Checking connection to Docker...
	I0318 20:47:46.004243   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetURL
	I0318 20:47:46.005539   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Using libvirt version 6000000
	I0318 20:47:46.007767   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.008106   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.008135   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.008303   21691 main.go:141] libmachine: Docker is up and running!
	I0318 20:47:46.008322   21691 main.go:141] libmachine: Reticulating splines...
	I0318 20:47:46.008328   21691 client.go:171] duration metric: took 26.89641561s to LocalClient.Create
	I0318 20:47:46.008349   21691 start.go:167] duration metric: took 26.896473285s to libmachine.API.Create "ha-315064"
	I0318 20:47:46.008363   21691 start.go:293] postStartSetup for "ha-315064-m02" (driver="kvm2")
	I0318 20:47:46.008375   21691 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:47:46.008398   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.008623   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:47:46.008648   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:46.010796   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.011124   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.011159   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.011253   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:46.011449   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.011607   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:46.011743   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:47:46.097024   21691 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:47:46.101794   21691 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:47:46.101813   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:47:46.101878   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:47:46.101968   21691 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 20:47:46.101979   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 20:47:46.102081   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 20:47:46.112735   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:47:46.138673   21691 start.go:296] duration metric: took 130.296968ms for postStartSetup
	I0318 20:47:46.138723   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetConfigRaw
	I0318 20:47:46.139238   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:47:46.141699   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.142076   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.142114   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.142341   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:47:46.142548   21691 start.go:128] duration metric: took 27.049500671s to createHost
	I0318 20:47:46.142569   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:46.144585   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.144949   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.144972   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.145108   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:46.145297   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.145460   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.145590   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:46.145732   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:46.145930   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:46.145941   21691 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:47:46.253936   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710794866.241426499
	
	I0318 20:47:46.253957   21691 fix.go:216] guest clock: 1710794866.241426499
	I0318 20:47:46.253964   21691 fix.go:229] Guest: 2024-03-18 20:47:46.241426499 +0000 UTC Remote: 2024-03-18 20:47:46.142559775 +0000 UTC m=+84.301835232 (delta=98.866724ms)
	I0318 20:47:46.253987   21691 fix.go:200] guest clock delta is within tolerance: 98.866724ms
	I0318 20:47:46.253997   21691 start.go:83] releasing machines lock for "ha-315064-m02", held for 27.161027842s
	I0318 20:47:46.254020   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.254252   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:47:46.256496   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.256789   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.256824   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.259159   21691 out.go:177] * Found network options:
	I0318 20:47:46.260551   21691 out.go:177]   - NO_PROXY=192.168.39.79
	W0318 20:47:46.262123   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 20:47:46.262157   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.262596   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.262749   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.262817   21691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:47:46.262853   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	W0318 20:47:46.262936   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 20:47:46.263005   21691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:47:46.263026   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:46.265347   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.265540   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.265744   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.265768   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.265951   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:46.266091   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.266124   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.266123   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.266243   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:46.266302   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:46.266370   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.266421   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:47:46.266493   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:46.266580   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:47:46.507378   21691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:47:46.514755   21691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:47:46.514815   21691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:47:46.533072   21691 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 20:47:46.533092   21691 start.go:494] detecting cgroup driver to use...
	I0318 20:47:46.533166   21691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:47:46.550301   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:47:46.564908   21691 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:47:46.564957   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:47:46.579343   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:47:46.594924   21691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:47:46.715361   21691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:47:46.902204   21691 docker.go:233] disabling docker service ...
	I0318 20:47:46.902349   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:47:46.917107   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:47:46.930537   21691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:47:47.042445   21691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:47:47.159759   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:47:47.175718   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:47:47.199673   21691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:47:47.199736   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.211567   21691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:47:47.211625   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.223499   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.234944   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.246410   21691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:47:47.258606   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.270498   21691 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.289264   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
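
The net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf can be checked in one pass (expected values reconstructed from the commands themselves):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, in some order:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
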
	I0318 20:47:47.300736   21691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:47:47.311171   21691 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 20:47:47.311209   21691 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 20:47:47.326024   21691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
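
The modprobe and ip_forward writes above only apply to the running kernel; on a systemd host the same settings would normally be made persistent roughly like this (sketch; the file names are arbitrary choices, not taken from the log):

    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' \
        | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system
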
	I0318 20:47:47.336401   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:47:47.473807   21691 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 20:47:47.649394   21691 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:47:47.649464   21691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:47:47.655364   21691 start.go:562] Will wait 60s for crictl version
	I0318 20:47:47.655423   21691 ssh_runner.go:195] Run: which crictl
	I0318 20:47:47.659847   21691 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:47:47.700625   21691 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 20:47:47.700697   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:47:47.733291   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:47:47.768461   21691 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:47:47.769812   21691 out.go:177]   - env NO_PROXY=192.168.39.79
	I0318 20:47:47.771081   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:47:47.773323   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:47.773708   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:47.773742   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:47.773847   21691 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:47:47.778823   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:47:47.793651   21691 mustload.go:65] Loading cluster: ha-315064
	I0318 20:47:47.793850   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:47:47.794090   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:47.794122   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:47.809130   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I0318 20:47:47.809566   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:47.810031   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:47.810057   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:47.810341   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:47.810518   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:47:47.811833   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:47:47.812102   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:47.812124   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:47.826297   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0318 20:47:47.826621   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:47.827036   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:47.827058   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:47.827362   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:47.827530   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:47:47.827691   21691 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064 for IP: 192.168.39.231
	I0318 20:47:47.827704   21691 certs.go:194] generating shared ca certs ...
	I0318 20:47:47.827721   21691 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:47.827844   21691 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:47:47.827900   21691 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:47:47.827912   21691 certs.go:256] generating profile certs ...
	I0318 20:47:47.827991   21691 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key
	I0318 20:47:47.828015   21691 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.2e13e493
	I0318 20:47:47.828028   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.2e13e493 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.231 192.168.39.254]
	I0318 20:47:48.093452   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.2e13e493 ...
	I0318 20:47:48.093479   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.2e13e493: {Name:mkd20d01fcb744945a4bb06b57a33915b0e35c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:48.093631   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.2e13e493 ...
	I0318 20:47:48.093644   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.2e13e493: {Name:mkd245430ef1aa369b0c6240cb5397c4595ada4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:48.093717   21691 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.2e13e493 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt
	I0318 20:47:48.093833   21691 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.2e13e493 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key
	I0318 20:47:48.093956   21691 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key
	I0318 20:47:48.093971   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 20:47:48.093982   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 20:47:48.093992   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 20:47:48.094005   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 20:47:48.094015   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 20:47:48.094027   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 20:47:48.094036   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 20:47:48.094053   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 20:47:48.094096   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 20:47:48.094127   21691 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 20:47:48.094135   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:47:48.094159   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:47:48.094179   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:47:48.094202   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:47:48.094239   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:47:48.094265   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:47:48.094277   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 20:47:48.094289   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 20:47:48.094319   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:47:48.097009   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:48.097391   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:47:48.097424   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:48.097554   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:47:48.097748   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:47:48.097881   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:47:48.098030   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:47:48.173144   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 20:47:48.180359   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 20:47:48.192761   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 20:47:48.198040   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0318 20:47:48.209648   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 20:47:48.214575   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 20:47:48.225444   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 20:47:48.230246   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0318 20:47:48.240368   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 20:47:48.246684   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 20:47:48.257856   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 20:47:48.262290   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0318 20:47:48.272552   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:47:48.301497   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:47:48.330774   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:47:48.356404   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:47:48.382230   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 20:47:48.411002   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 20:47:48.437284   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:47:48.464576   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 20:47:48.490586   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:47:48.515964   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 20:47:48.542875   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 20:47:48.569140   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 20:47:48.586895   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0318 20:47:48.605599   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 20:47:48.623898   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0318 20:47:48.641536   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 20:47:48.659192   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0318 20:47:48.678693   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 20:47:48.696310   21691 ssh_runner.go:195] Run: openssl version
	I0318 20:47:48.702361   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 20:47:48.713710   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 20:47:48.718609   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 20:47:48.718649   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 20:47:48.725676   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 20:47:48.736977   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 20:47:48.748349   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 20:47:48.753034   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 20:47:48.753072   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 20:47:48.759043   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 20:47:48.770340   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:47:48.781640   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:47:48.786579   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:47:48.786635   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:47:48.793043   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
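
The /etc/ssl/certs/<hash>.0 link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes; each one can be recomputed from the corresponding PEM:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem
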
	I0318 20:47:48.804637   21691 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:47:48.809529   21691 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 20:47:48.809582   21691 kubeadm.go:928] updating node {m02 192.168.39.231 8443 v1.28.4 crio true true} ...
	I0318 20:47:48.809676   21691 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-315064-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:47:48.809710   21691 kube-vip.go:111] generating kube-vip config ...
	I0318 20:47:48.809746   21691 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 20:47:48.829499   21691 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 20:47:48.829572   21691 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
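
The YAML above is the kube-vip static pod that minikube writes to /etc/kubernetes/manifests/kube-vip.yaml (the scp appears a few lines further down) so the control-plane VIP 192.168.39.254 can be advertised and load-balanced on port 8443. As a rough, hypothetical sketch of rendering such a manifest from a couple of parameters — not minikube's actual kube-vip template — in Go:

// Hypothetical sketch: render a kube-vip style static-pod manifest from a VIP
// address and port. The field names mirror the config logged above, but this
// template is illustrative and not minikube's real kube-vip template.
package main

import (
	"os"
	"text/template"
)

const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log above: the HA VIP and the API server port.
	params := struct{ VIP, Port string }{VIP: "192.168.39.254", Port: "8443"}
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
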
	I0318 20:47:48.829631   21691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:47:48.840855   21691 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 20:47:48.840895   21691 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 20:47:48.851675   21691 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 20:47:48.851708   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 20:47:48.851764   21691 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0318 20:47:48.851802   21691 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0318 20:47:48.851777   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 20:47:48.858044   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 20:47:48.858081   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 20:48:26.937437   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 20:48:26.937537   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 20:48:26.944042   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 20:48:26.944078   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 20:49:09.657049   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:49:09.677449   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 20:49:09.677555   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 20:49:09.682608   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 20:49:09.682638   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
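
The transfers above are the slow part of bringing up m02: kubectl, kubeadm and kubelet for v1.28.4 are fetched from dl.k8s.io with a `?checksum=file:<url>.sha256` query and then copied onto the node, because the stat checks found nothing under /var/lib/minikube/binaries/v1.28.4. A minimal, illustrative Go sketch of that download-and-verify step (not minikube's downloader; the URL and output path are just examples taken from the log):

// Illustrative only: fetch one Kubernetes release binary and verify it against
// the published .sha256 file, roughly what the "Downloading: ...?checksum=file:..."
// lines above describe. Not minikube's actual download code.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	fmt.Println("checksum OK, writing ./kubectl")
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
}
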
	I0318 20:49:10.166149   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 20:49:10.177125   21691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 20:49:10.195166   21691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:49:10.212663   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 20:49:10.230083   21691 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 20:49:10.234495   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:49:10.248051   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:49:10.370218   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:49:10.387816   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:49:10.388256   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:49:10.388310   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:49:10.402882   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45745
	I0318 20:49:10.403348   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:49:10.403877   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:49:10.403899   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:49:10.404229   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:49:10.404433   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:49:10.404613   21691 start.go:316] joinCluster: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:49:10.404706   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 20:49:10.404722   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:49:10.407626   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:49:10.408109   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:49:10.408138   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:49:10.408289   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:49:10.408475   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:49:10.408653   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:49:10.408803   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:49:10.581970   21691 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:49:10.582014   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dx7kgq.irksjynle7vx4zyx --discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-315064-m02 --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443"
	I0318 20:49:51.873194   21691 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dx7kgq.irksjynle7vx4zyx --discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-315064-m02 --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443": (41.29114975s)
	I0318 20:49:51.873233   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 20:49:52.229164   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-315064-m02 minikube.k8s.io/updated_at=2024_03_18T20_49_52_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=ha-315064 minikube.k8s.io/primary=false
	I0318 20:49:52.365421   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-315064-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 20:49:52.491390   21691 start.go:318] duration metric: took 42.086770613s to joinCluster
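
The two kubeadm invocations above show the join pattern: `kubeadm token create --print-join-command` runs on the existing control plane, and the printed command is re-run on m02 with the extra flags minikube appends for a control-plane join. A hypothetical Go helper that assembles that final command (the token and discovery hash below are placeholders, not this run's values):

// Hypothetical helper: append the control-plane flags seen in the log to the
// join command printed by `kubeadm token create --print-join-command`.
package main

import (
	"fmt"
	"strings"
)

func controlPlaneJoinCmd(printed, nodeName, advertiseIP string, port int) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return strings.TrimSpace(printed) + " " + strings.Join(extra, " ")
}

func main() {
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(controlPlaneJoinCmd(printed, "ha-315064-m02", "192.168.39.231", 8443))
}
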
	I0318 20:49:52.491470   21691 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:49:52.493205   21691 out.go:177] * Verifying Kubernetes components...
	I0318 20:49:52.491752   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:49:52.494673   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:49:52.691301   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:49:52.729729   21691 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:49:52.730098   21691 kapi.go:59] client config for ha-315064: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt", KeyFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key", CAFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 20:49:52.730181   21691 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.79:8443
	I0318 20:49:52.730452   21691 node_ready.go:35] waiting up to 6m0s for node "ha-315064-m02" to be "Ready" ...
	I0318 20:49:52.730554   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:52.730603   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:52.730618   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:52.730624   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:52.746109   21691 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 20:49:53.231159   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:53.231181   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:53.231189   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:53.231193   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:53.235137   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:53.731698   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:53.731719   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:53.731727   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:53.731732   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:53.735689   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:54.231340   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:54.231359   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:54.231367   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:54.231370   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:54.235405   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:49:54.731346   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:54.731371   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:54.731382   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:54.731388   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:54.734547   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:54.735056   21691 node_ready.go:53] node "ha-315064-m02" has status "Ready":"False"
	I0318 20:49:55.230973   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:55.230994   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:55.231004   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:55.231010   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:55.235442   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:49:55.730621   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:55.730640   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:55.730648   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:55.730651   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:55.734572   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:56.230767   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:56.230793   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:56.230802   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:56.230807   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:56.235102   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:49:56.730967   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:56.730988   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:56.731002   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:56.731007   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:56.734890   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:56.735745   21691 node_ready.go:53] node "ha-315064-m02" has status "Ready":"False"
	I0318 20:49:57.230817   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:57.230837   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:57.230848   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:57.230854   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:57.234764   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:57.731135   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:57.731160   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:57.731173   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:57.731181   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:57.736522   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:49:58.230674   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:58.230705   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:58.230717   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:58.230725   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:58.234715   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:58.731046   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:58.731066   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:58.731073   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:58.731077   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:58.735536   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:49:58.736325   21691 node_ready.go:53] node "ha-315064-m02" has status "Ready":"False"
	I0318 20:49:59.230963   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:59.230992   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:59.231000   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:59.231004   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:59.235006   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:59.731345   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:59.731366   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:59.731373   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:59.731377   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:59.735085   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:00.231134   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:00.231152   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:00.231159   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:00.231161   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:00.236575   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:00.731654   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:00.731677   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:00.731688   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:00.731695   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:00.735275   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:01.231376   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:01.231402   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.231412   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.231418   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.235594   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:01.236108   21691 node_ready.go:53] node "ha-315064-m02" has status "Ready":"False"
	I0318 20:50:01.731200   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:01.731227   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.731237   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.731242   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.744476   21691 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 20:50:01.745058   21691 node_ready.go:49] node "ha-315064-m02" has status "Ready":"True"
	I0318 20:50:01.745086   21691 node_ready.go:38] duration metric: took 9.014593914s for node "ha-315064-m02" to be "Ready" ...
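
The repeated GET /api/v1/nodes/ha-315064-m02 requests above are the node_ready wait: roughly one poll every 500 ms until the Ready condition flips to True (about 9 s in this run). Written against client-go directly, an equivalent wait could look like the sketch below; the kubeconfig path and node name come from this log, the rest is illustrative and not minikube's implementation.

// Sketch: poll a node until its Ready condition is True, similar in spirit to
// the node_ready.go wait logged above. Illustrative, not minikube code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18421-5321/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-315064-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
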
	I0318 20:50:01.745099   21691 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 20:50:01.745200   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:01.745211   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.745221   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.745232   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.750396   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:01.757942   21691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.758011   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgqzg
	I0318 20:50:01.758016   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.758024   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.758027   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.763286   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:01.764063   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:01.764081   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.764090   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.764097   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.766807   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.767298   21691 pod_ready.go:92] pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:01.767314   21691 pod_ready.go:81] duration metric: took 9.349024ms for pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.767324   21691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.767365   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hrrzn
	I0318 20:50:01.767373   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.767379   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.767383   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.770042   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.770568   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:01.770581   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.770587   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.770591   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.772890   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.773445   21691 pod_ready.go:92] pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:01.773463   21691 pod_ready.go:81] duration metric: took 6.1332ms for pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.773471   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.773515   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064
	I0318 20:50:01.773523   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.773530   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.773533   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.775945   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.776625   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:01.776638   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.776645   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.776647   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.778941   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.779611   21691 pod_ready.go:92] pod "etcd-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:01.779628   21691 pod_ready.go:81] duration metric: took 6.149827ms for pod "etcd-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.779638   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.779692   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m02
	I0318 20:50:01.779702   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.779711   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.779720   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.782365   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.783043   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:01.783058   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.783065   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.783071   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.785832   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.786416   21691 pod_ready.go:92] pod "etcd-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:01.786441   21691 pod_ready.go:81] duration metric: took 6.793477ms for pod "etcd-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.786458   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.931713   21691 request.go:629] Waited for 145.197061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064
	I0318 20:50:01.931779   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064
	I0318 20:50:01.931786   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.931793   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.931799   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.935672   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.132038   21691 request.go:629] Waited for 195.406119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:02.132095   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:02.132102   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.132109   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.132113   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.135819   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.136326   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:02.136347   21691 pod_ready.go:81] duration metric: took 349.8771ms for pod "kube-apiserver-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.136359   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.331412   21691 request.go:629] Waited for 194.985949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m02
	I0318 20:50:02.331478   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m02
	I0318 20:50:02.331484   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.331497   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.331504   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.335255   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.531392   21691 request.go:629] Waited for 195.271299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:02.531462   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:02.531467   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.531474   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.531481   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.535658   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:02.536145   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:02.536162   21691 pod_ready.go:81] duration metric: took 399.795443ms for pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.536172   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.731190   21691 request.go:629] Waited for 194.958242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064
	I0318 20:50:02.731271   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064
	I0318 20:50:02.731281   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.731289   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.731293   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.735169   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.931723   21691 request.go:629] Waited for 195.72943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:02.931779   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:02.931784   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.931791   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.931794   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.935677   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.936292   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:02.936310   21691 pod_ready.go:81] duration metric: took 400.128828ms for pod "kube-controller-manager-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.936322   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.131283   21691 request.go:629] Waited for 194.898302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m02
	I0318 20:50:03.131363   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m02
	I0318 20:50:03.131380   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.131388   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.131391   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.134918   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:03.332221   21691 request.go:629] Waited for 196.378538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:03.332304   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:03.332316   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.332327   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.332338   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.336513   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:03.337083   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:03.337106   21691 pod_ready.go:81] duration metric: took 400.77159ms for pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.337128   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bccjj" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.532242   21691 request.go:629] Waited for 195.052953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bccjj
	I0318 20:50:03.532330   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bccjj
	I0318 20:50:03.532345   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.532359   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.532369   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.538777   21691 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 20:50:03.731734   21691 request.go:629] Waited for 192.381403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:03.731799   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:03.731806   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.731817   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.731823   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.737638   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:03.738285   21691 pod_ready.go:92] pod "kube-proxy-bccjj" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:03.738309   21691 pod_ready.go:81] duration metric: took 401.167668ms for pod "kube-proxy-bccjj" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.738325   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrm24" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.931424   21691 request.go:629] Waited for 193.02369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrm24
	I0318 20:50:03.931472   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrm24
	I0318 20:50:03.931478   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.931486   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.931498   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.936430   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.132060   21691 request.go:629] Waited for 194.396507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:04.132115   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:04.132120   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.132127   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.132132   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.136617   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.137250   21691 pod_ready.go:92] pod "kube-proxy-wrm24" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:04.137268   21691 pod_ready.go:81] duration metric: took 398.935303ms for pod "kube-proxy-wrm24" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.137277   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.331725   21691 request.go:629] Waited for 194.337813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064
	I0318 20:50:04.331781   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064
	I0318 20:50:04.331789   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.331797   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.331801   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.336450   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.531594   21691 request.go:629] Waited for 193.365956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:04.531645   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:04.531651   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.531661   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.531667   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.535123   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:04.535892   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:04.535910   21691 pod_ready.go:81] duration metric: took 398.625255ms for pod "kube-scheduler-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.535919   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.732072   21691 request.go:629] Waited for 196.087759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m02
	I0318 20:50:04.732130   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m02
	I0318 20:50:04.732135   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.732143   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.732148   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.736272   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.931908   21691 request.go:629] Waited for 194.34409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:04.931961   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:04.931966   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.931973   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.931986   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.936740   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.937266   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:04.937283   21691 pod_ready.go:81] duration metric: took 401.357763ms for pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.937297   21691 pod_ready.go:38] duration metric: took 3.192182419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
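
The many "Waited ... due to client-side throttling" lines in the block above are client-go's default rate limiter at work: the rest.Config dumped earlier in this log has QPS:0 and Burst:0, which client-go treats as 5 requests/s with a burst of 10, so back-to-back pod and node GETs get queued for ~200 ms each. A small illustrative sketch of raising those limits (the values are arbitrary examples, not what minikube configures):

// Illustrative: the throttling messages come from client-go's client-side
// rate limiter. A rest.Config left at QPS:0/Burst:0 defaults to 5 QPS with a
// burst of 10; raising the limits removes the waits.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18421-5321/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // example value; 0 means the default of 5
	cfg.Burst = 100 // example value; 0 means the default of 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T (QPS=%v Burst=%v)\n", cs, cfg.QPS, cfg.Burst)
}
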
	I0318 20:50:04.937318   21691 api_server.go:52] waiting for apiserver process to appear ...
	I0318 20:50:04.937380   21691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:50:04.956032   21691 api_server.go:72] duration metric: took 12.464523612s to wait for apiserver process to appear ...
	I0318 20:50:04.956072   21691 api_server.go:88] waiting for apiserver healthz status ...
	I0318 20:50:04.956096   21691 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I0318 20:50:04.964543   21691 api_server.go:279] https://192.168.39.79:8443/healthz returned 200:
	ok
	I0318 20:50:04.964610   21691 round_trippers.go:463] GET https://192.168.39.79:8443/version
	I0318 20:50:04.964622   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.964630   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.964636   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.967011   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:04.967419   21691 api_server.go:141] control plane version: v1.28.4
	I0318 20:50:04.967438   21691 api_server.go:131] duration metric: took 11.358845ms to wait for apiserver health ...
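
The healthz probe and the follow-up GET /version above can be reproduced with a raw REST call through the same kubeconfig; the sketch below is illustrative, not the api_server.go implementation.

// Illustrative sketch: hit /healthz and the server version endpoint via the
// kubeconfig credentials, mirroring the checks logged above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18421-5321/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body)) // expect "ok"

	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.4
}
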
	I0318 20:50:04.967447   21691 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 20:50:05.131838   21691 request.go:629] Waited for 164.328956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:05.131891   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:05.131906   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:05.131934   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:05.131945   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:05.137393   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:05.142231   21691 system_pods.go:59] 17 kube-system pods found
	I0318 20:50:05.142256   21691 system_pods.go:61] "coredns-5dd5756b68-fgqzg" [245a67a5-7e01-445d-a741-900dd301c127] Running
	I0318 20:50:05.142261   21691 system_pods.go:61] "coredns-5dd5756b68-hrrzn" [bd22f324-f86b-458f-8443-1fbb4c47521e] Running
	I0318 20:50:05.142265   21691 system_pods.go:61] "etcd-ha-315064" [9cda89d4-982e-4b59-9d41-5318d9927e10] Running
	I0318 20:50:05.142268   21691 system_pods.go:61] "etcd-ha-315064-m02" [330ca3db-e1ba-4ce7-9b37-c3d791f7a3ad] Running
	I0318 20:50:05.142271   21691 system_pods.go:61] "kindnet-dvtw7" [88b28235-5259-453e-af33-f2ab8e7e6609] Running
	I0318 20:50:05.142274   21691 system_pods.go:61] "kindnet-tbghx" [9c5ae7df-5e40-42ca-b8e6-d7bbc335e065] Running
	I0318 20:50:05.142277   21691 system_pods.go:61] "kube-apiserver-ha-315064" [efa72228-3815-4456-89ee-603b73e97ab9] Running
	I0318 20:50:05.142282   21691 system_pods.go:61] "kube-apiserver-ha-315064-m02" [2a466fac-9e4b-4887-8ad3-3f01d594b615] Running
	I0318 20:50:05.142287   21691 system_pods.go:61] "kube-controller-manager-ha-315064" [2630ed62-b0c8-4cee-899a-9f7d14eabefb] Running
	I0318 20:50:05.142294   21691 system_pods.go:61] "kube-controller-manager-ha-315064-m02" [ba8783c4-bba1-41ee-97d2-62186bd2f96e] Running
	I0318 20:50:05.142304   21691 system_pods.go:61] "kube-proxy-bccjj" [f0f1ef98-75cf-47cd-a99b-ba443d7df38a] Running
	I0318 20:50:05.142309   21691 system_pods.go:61] "kube-proxy-wrm24" [b686bb37-4624-4b09-b335-d292a914e41c] Running
	I0318 20:50:05.142321   21691 system_pods.go:61] "kube-scheduler-ha-315064" [2d7ccbd2-5151-466c-83b1-39bdd17813d1] Running
	I0318 20:50:05.142326   21691 system_pods.go:61] "kube-scheduler-ha-315064-m02" [2a91d68a-c56f-43c9-985b-c0a2d72d56a8] Running
	I0318 20:50:05.142330   21691 system_pods.go:61] "kube-vip-ha-315064" [af9ee260-66a6-435a-957c-40b598d3d9ec] Running
	I0318 20:50:05.142334   21691 system_pods.go:61] "kube-vip-ha-315064-m02" [45c22149-503d-49ed-8b45-63f95a8c402b] Running
	I0318 20:50:05.142337   21691 system_pods.go:61] "storage-provisioner" [4ddebef9-cc69-4535-8dc5-9117878507d8] Running
	I0318 20:50:05.142346   21691 system_pods.go:74] duration metric: took 174.892878ms to wait for pod list to return data ...
	I0318 20:50:05.142356   21691 default_sa.go:34] waiting for default service account to be created ...
	I0318 20:50:05.331790   21691 request.go:629] Waited for 189.358753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/default/serviceaccounts
	I0318 20:50:05.331878   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/default/serviceaccounts
	I0318 20:50:05.331885   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:05.331892   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:05.331895   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:05.335930   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:05.336138   21691 default_sa.go:45] found service account: "default"
	I0318 20:50:05.336154   21691 default_sa.go:55] duration metric: took 193.78625ms for default service account to be created ...
	I0318 20:50:05.336164   21691 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 20:50:05.531741   21691 request.go:629] Waited for 195.502632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:05.531801   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:05.531807   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:05.531815   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:05.531821   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:05.538222   21691 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 20:50:05.543102   21691 system_pods.go:86] 17 kube-system pods found
	I0318 20:50:05.543123   21691 system_pods.go:89] "coredns-5dd5756b68-fgqzg" [245a67a5-7e01-445d-a741-900dd301c127] Running
	I0318 20:50:05.543129   21691 system_pods.go:89] "coredns-5dd5756b68-hrrzn" [bd22f324-f86b-458f-8443-1fbb4c47521e] Running
	I0318 20:50:05.543133   21691 system_pods.go:89] "etcd-ha-315064" [9cda89d4-982e-4b59-9d41-5318d9927e10] Running
	I0318 20:50:05.543136   21691 system_pods.go:89] "etcd-ha-315064-m02" [330ca3db-e1ba-4ce7-9b37-c3d791f7a3ad] Running
	I0318 20:50:05.543140   21691 system_pods.go:89] "kindnet-dvtw7" [88b28235-5259-453e-af33-f2ab8e7e6609] Running
	I0318 20:50:05.543145   21691 system_pods.go:89] "kindnet-tbghx" [9c5ae7df-5e40-42ca-b8e6-d7bbc335e065] Running
	I0318 20:50:05.543152   21691 system_pods.go:89] "kube-apiserver-ha-315064" [efa72228-3815-4456-89ee-603b73e97ab9] Running
	I0318 20:50:05.543158   21691 system_pods.go:89] "kube-apiserver-ha-315064-m02" [2a466fac-9e4b-4887-8ad3-3f01d594b615] Running
	I0318 20:50:05.543168   21691 system_pods.go:89] "kube-controller-manager-ha-315064" [2630ed62-b0c8-4cee-899a-9f7d14eabefb] Running
	I0318 20:50:05.543175   21691 system_pods.go:89] "kube-controller-manager-ha-315064-m02" [ba8783c4-bba1-41ee-97d2-62186bd2f96e] Running
	I0318 20:50:05.543186   21691 system_pods.go:89] "kube-proxy-bccjj" [f0f1ef98-75cf-47cd-a99b-ba443d7df38a] Running
	I0318 20:50:05.543194   21691 system_pods.go:89] "kube-proxy-wrm24" [b686bb37-4624-4b09-b335-d292a914e41c] Running
	I0318 20:50:05.543202   21691 system_pods.go:89] "kube-scheduler-ha-315064" [2d7ccbd2-5151-466c-83b1-39bdd17813d1] Running
	I0318 20:50:05.543206   21691 system_pods.go:89] "kube-scheduler-ha-315064-m02" [2a91d68a-c56f-43c9-985b-c0a2d72d56a8] Running
	I0318 20:50:05.543210   21691 system_pods.go:89] "kube-vip-ha-315064" [af9ee260-66a6-435a-957c-40b598d3d9ec] Running
	I0318 20:50:05.543214   21691 system_pods.go:89] "kube-vip-ha-315064-m02" [45c22149-503d-49ed-8b45-63f95a8c402b] Running
	I0318 20:50:05.543217   21691 system_pods.go:89] "storage-provisioner" [4ddebef9-cc69-4535-8dc5-9117878507d8] Running
	I0318 20:50:05.543224   21691 system_pods.go:126] duration metric: took 207.051256ms to wait for k8s-apps to be running ...
	I0318 20:50:05.543232   21691 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 20:50:05.543284   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:50:05.560465   21691 system_svc.go:56] duration metric: took 17.227626ms WaitForService to wait for kubelet
	I0318 20:50:05.560487   21691 kubeadm.go:576] duration metric: took 13.06898296s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:50:05.560507   21691 node_conditions.go:102] verifying NodePressure condition ...
	I0318 20:50:05.731890   21691 request.go:629] Waited for 171.304468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes
	I0318 20:50:05.731951   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes
	I0318 20:50:05.731961   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:05.731972   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:05.731980   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:05.736397   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:05.737322   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:50:05.737345   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:50:05.737358   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:50:05.737364   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:50:05.737369   21691 node_conditions.go:105] duration metric: took 176.857341ms to run NodePressure ...
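	The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, not from the API server. As a minimal sketch (not minikube's code), the limiter is governed by the QPS and Burst fields of rest.Config; the kubeconfig path below is only an example:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig the way kubectl would (path is an example).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// Default client-side limits are low (roughly QPS=5, Burst=10); the
	// "Waited for ... due to client-side throttling" messages appear when
	// bursts of list/get calls outpace them.
	config.QPS = 50
	config.Burst = 100

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}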
	I0318 20:50:05.737386   21691 start.go:240] waiting for startup goroutines ...
	I0318 20:50:05.737415   21691 start.go:254] writing updated cluster config ...
	I0318 20:50:05.739691   21691 out.go:177] 
	I0318 20:50:05.741235   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:50:05.741336   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:50:05.743192   21691 out.go:177] * Starting "ha-315064-m03" control-plane node in "ha-315064" cluster
	I0318 20:50:05.744626   21691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:50:05.744644   21691 cache.go:56] Caching tarball of preloaded images
	I0318 20:50:05.744740   21691 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:50:05.744753   21691 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:50:05.744842   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:50:05.745066   21691 start.go:360] acquireMachinesLock for ha-315064-m03: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:50:05.745115   21691 start.go:364] duration metric: took 29.071µs to acquireMachinesLock for "ha-315064-m03"
	I0318 20:50:05.745138   21691 start.go:93] Provisioning new machine with config: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:50:05.745265   21691 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0318 20:50:05.746913   21691 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 20:50:05.747001   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:50:05.747031   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:50:05.761463   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0318 20:50:05.761896   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:50:05.762329   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:50:05.762347   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:50:05.762700   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:50:05.762898   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetMachineName
	I0318 20:50:05.763062   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:05.763220   21691 start.go:159] libmachine.API.Create for "ha-315064" (driver="kvm2")
	I0318 20:50:05.763248   21691 client.go:168] LocalClient.Create starting
	I0318 20:50:05.763280   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 20:50:05.763313   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:50:05.763332   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:50:05.763396   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 20:50:05.763419   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:50:05.763434   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:50:05.763463   21691 main.go:141] libmachine: Running pre-create checks...
	I0318 20:50:05.763475   21691 main.go:141] libmachine: (ha-315064-m03) Calling .PreCreateCheck
	I0318 20:50:05.763645   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetConfigRaw
	I0318 20:50:05.763974   21691 main.go:141] libmachine: Creating machine...
	I0318 20:50:05.763986   21691 main.go:141] libmachine: (ha-315064-m03) Calling .Create
	I0318 20:50:05.764104   21691 main.go:141] libmachine: (ha-315064-m03) Creating KVM machine...
	I0318 20:50:05.765309   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found existing default KVM network
	I0318 20:50:05.765418   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found existing private KVM network mk-ha-315064
	I0318 20:50:05.765530   21691 main.go:141] libmachine: (ha-315064-m03) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03 ...
	I0318 20:50:05.765564   21691 main.go:141] libmachine: (ha-315064-m03) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 20:50:05.765607   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:05.765522   22592 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:50:05.765705   21691 main.go:141] libmachine: (ha-315064-m03) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 20:50:05.983831   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:05.983708   22592 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa...
	I0318 20:50:06.362974   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:06.362874   22592 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/ha-315064-m03.rawdisk...
	I0318 20:50:06.363006   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Writing magic tar header
	I0318 20:50:06.363185   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Writing SSH key tar header
	I0318 20:50:06.363638   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:06.363578   22592 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03 ...
	I0318 20:50:06.363766   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03
	I0318 20:50:06.363791   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03 (perms=drwx------)
	I0318 20:50:06.363811   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 20:50:06.363825   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 20:50:06.363839   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:50:06.363852   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 20:50:06.363867   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 20:50:06.363884   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins
	I0318 20:50:06.363899   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 20:50:06.363914   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 20:50:06.363923   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 20:50:06.363934   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 20:50:06.363944   21691 main.go:141] libmachine: (ha-315064-m03) Creating domain...
	I0318 20:50:06.363967   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home
	I0318 20:50:06.363979   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Skipping /home - not owner
	I0318 20:50:06.365029   21691 main.go:141] libmachine: (ha-315064-m03) define libvirt domain using xml: 
	I0318 20:50:06.365047   21691 main.go:141] libmachine: (ha-315064-m03) <domain type='kvm'>
	I0318 20:50:06.365054   21691 main.go:141] libmachine: (ha-315064-m03)   <name>ha-315064-m03</name>
	I0318 20:50:06.365059   21691 main.go:141] libmachine: (ha-315064-m03)   <memory unit='MiB'>2200</memory>
	I0318 20:50:06.365068   21691 main.go:141] libmachine: (ha-315064-m03)   <vcpu>2</vcpu>
	I0318 20:50:06.365079   21691 main.go:141] libmachine: (ha-315064-m03)   <features>
	I0318 20:50:06.365090   21691 main.go:141] libmachine: (ha-315064-m03)     <acpi/>
	I0318 20:50:06.365100   21691 main.go:141] libmachine: (ha-315064-m03)     <apic/>
	I0318 20:50:06.365115   21691 main.go:141] libmachine: (ha-315064-m03)     <pae/>
	I0318 20:50:06.365124   21691 main.go:141] libmachine: (ha-315064-m03)     
	I0318 20:50:06.365136   21691 main.go:141] libmachine: (ha-315064-m03)   </features>
	I0318 20:50:06.365146   21691 main.go:141] libmachine: (ha-315064-m03)   <cpu mode='host-passthrough'>
	I0318 20:50:06.365157   21691 main.go:141] libmachine: (ha-315064-m03)   
	I0318 20:50:06.365166   21691 main.go:141] libmachine: (ha-315064-m03)   </cpu>
	I0318 20:50:06.365176   21691 main.go:141] libmachine: (ha-315064-m03)   <os>
	I0318 20:50:06.365183   21691 main.go:141] libmachine: (ha-315064-m03)     <type>hvm</type>
	I0318 20:50:06.365195   21691 main.go:141] libmachine: (ha-315064-m03)     <boot dev='cdrom'/>
	I0318 20:50:06.365205   21691 main.go:141] libmachine: (ha-315064-m03)     <boot dev='hd'/>
	I0318 20:50:06.365214   21691 main.go:141] libmachine: (ha-315064-m03)     <bootmenu enable='no'/>
	I0318 20:50:06.365228   21691 main.go:141] libmachine: (ha-315064-m03)   </os>
	I0318 20:50:06.365254   21691 main.go:141] libmachine: (ha-315064-m03)   <devices>
	I0318 20:50:06.365277   21691 main.go:141] libmachine: (ha-315064-m03)     <disk type='file' device='cdrom'>
	I0318 20:50:06.365293   21691 main.go:141] libmachine: (ha-315064-m03)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/boot2docker.iso'/>
	I0318 20:50:06.365305   21691 main.go:141] libmachine: (ha-315064-m03)       <target dev='hdc' bus='scsi'/>
	I0318 20:50:06.365317   21691 main.go:141] libmachine: (ha-315064-m03)       <readonly/>
	I0318 20:50:06.365327   21691 main.go:141] libmachine: (ha-315064-m03)     </disk>
	I0318 20:50:06.365342   21691 main.go:141] libmachine: (ha-315064-m03)     <disk type='file' device='disk'>
	I0318 20:50:06.365364   21691 main.go:141] libmachine: (ha-315064-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 20:50:06.365384   21691 main.go:141] libmachine: (ha-315064-m03)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/ha-315064-m03.rawdisk'/>
	I0318 20:50:06.365395   21691 main.go:141] libmachine: (ha-315064-m03)       <target dev='hda' bus='virtio'/>
	I0318 20:50:06.365406   21691 main.go:141] libmachine: (ha-315064-m03)     </disk>
	I0318 20:50:06.365419   21691 main.go:141] libmachine: (ha-315064-m03)     <interface type='network'>
	I0318 20:50:06.365432   21691 main.go:141] libmachine: (ha-315064-m03)       <source network='mk-ha-315064'/>
	I0318 20:50:06.365444   21691 main.go:141] libmachine: (ha-315064-m03)       <model type='virtio'/>
	I0318 20:50:06.365453   21691 main.go:141] libmachine: (ha-315064-m03)     </interface>
	I0318 20:50:06.365465   21691 main.go:141] libmachine: (ha-315064-m03)     <interface type='network'>
	I0318 20:50:06.365474   21691 main.go:141] libmachine: (ha-315064-m03)       <source network='default'/>
	I0318 20:50:06.365486   21691 main.go:141] libmachine: (ha-315064-m03)       <model type='virtio'/>
	I0318 20:50:06.365495   21691 main.go:141] libmachine: (ha-315064-m03)     </interface>
	I0318 20:50:06.365503   21691 main.go:141] libmachine: (ha-315064-m03)     <serial type='pty'>
	I0318 20:50:06.365514   21691 main.go:141] libmachine: (ha-315064-m03)       <target port='0'/>
	I0318 20:50:06.365526   21691 main.go:141] libmachine: (ha-315064-m03)     </serial>
	I0318 20:50:06.365536   21691 main.go:141] libmachine: (ha-315064-m03)     <console type='pty'>
	I0318 20:50:06.365548   21691 main.go:141] libmachine: (ha-315064-m03)       <target type='serial' port='0'/>
	I0318 20:50:06.365560   21691 main.go:141] libmachine: (ha-315064-m03)     </console>
	I0318 20:50:06.365599   21691 main.go:141] libmachine: (ha-315064-m03)     <rng model='virtio'>
	I0318 20:50:06.365621   21691 main.go:141] libmachine: (ha-315064-m03)       <backend model='random'>/dev/random</backend>
	I0318 20:50:06.365632   21691 main.go:141] libmachine: (ha-315064-m03)     </rng>
	I0318 20:50:06.365639   21691 main.go:141] libmachine: (ha-315064-m03)     
	I0318 20:50:06.365647   21691 main.go:141] libmachine: (ha-315064-m03)     
	I0318 20:50:06.365654   21691 main.go:141] libmachine: (ha-315064-m03)   </devices>
	I0318 20:50:06.365662   21691 main.go:141] libmachine: (ha-315064-m03) </domain>
	I0318 20:50:06.365668   21691 main.go:141] libmachine: (ha-315064-m03) 
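	The <domain type='kvm'> block above is the libvirt definition the kvm2 driver logs before creating the VM. A minimal sketch of defining and starting such a domain with the libvirt Go bindings, assuming the XML has been saved to a file (illustrative only, not the driver's actual code):

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Read a domain definition like the <domain type='kvm'> block logged above
	// (hypothetical file holding that XML).
	domainXML, err := os.ReadFile("ha-315064-m03.xml")
	if err != nil {
		panic(err)
	}

	// qemu:///system matches the KVMQemuURI value in the machine config.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("started domain", name)
}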
	I0318 20:50:06.371877   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:0f:e1:d4 in network default
	I0318 20:50:06.372408   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:06.372426   21691 main.go:141] libmachine: (ha-315064-m03) Ensuring networks are active...
	I0318 20:50:06.373001   21691 main.go:141] libmachine: (ha-315064-m03) Ensuring network default is active
	I0318 20:50:06.373284   21691 main.go:141] libmachine: (ha-315064-m03) Ensuring network mk-ha-315064 is active
	I0318 20:50:06.373590   21691 main.go:141] libmachine: (ha-315064-m03) Getting domain xml...
	I0318 20:50:06.374214   21691 main.go:141] libmachine: (ha-315064-m03) Creating domain...
	I0318 20:50:07.558873   21691 main.go:141] libmachine: (ha-315064-m03) Waiting to get IP...
	I0318 20:50:07.559660   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:07.560040   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:07.560068   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:07.559990   22592 retry.go:31] will retry after 310.268269ms: waiting for machine to come up
	I0318 20:50:07.872329   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:07.872701   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:07.872728   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:07.872680   22592 retry.go:31] will retry after 354.462724ms: waiting for machine to come up
	I0318 20:50:08.229217   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:08.229653   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:08.229698   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:08.229616   22592 retry.go:31] will retry after 319.179586ms: waiting for machine to come up
	I0318 20:50:08.549953   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:08.550351   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:08.550380   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:08.550323   22592 retry.go:31] will retry after 573.57697ms: waiting for machine to come up
	I0318 20:50:09.125080   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:09.125557   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:09.125578   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:09.125520   22592 retry.go:31] will retry after 568.689512ms: waiting for machine to come up
	I0318 20:50:09.696601   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:09.697117   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:09.697144   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:09.697063   22592 retry.go:31] will retry after 804.121348ms: waiting for machine to come up
	I0318 20:50:10.502794   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:10.503186   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:10.503212   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:10.503136   22592 retry.go:31] will retry after 1.129772692s: waiting for machine to come up
	I0318 20:50:11.633833   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:11.634303   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:11.634329   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:11.634258   22592 retry.go:31] will retry after 1.01162733s: waiting for machine to come up
	I0318 20:50:12.647391   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:12.647797   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:12.647826   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:12.647764   22592 retry.go:31] will retry after 1.148388807s: waiting for machine to come up
	I0318 20:50:13.797943   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:13.798312   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:13.798334   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:13.798267   22592 retry.go:31] will retry after 2.323236456s: waiting for machine to come up
	I0318 20:50:16.123668   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:16.124130   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:16.124158   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:16.124076   22592 retry.go:31] will retry after 2.064821918s: waiting for machine to come up
	I0318 20:50:18.189927   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:18.190475   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:18.190504   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:18.190399   22592 retry.go:31] will retry after 2.594877199s: waiting for machine to come up
	I0318 20:50:20.786623   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:20.787084   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:20.787112   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:20.787044   22592 retry.go:31] will retry after 3.538825148s: waiting for machine to come up
	I0318 20:50:24.327462   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:24.327890   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:24.327916   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:24.327849   22592 retry.go:31] will retry after 5.508050331s: waiting for machine to come up
	I0318 20:50:29.838279   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:29.838872   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:29.838894   21691 main.go:141] libmachine: (ha-315064-m03) Found IP for machine: 192.168.39.84
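	The "will retry after ...: waiting for machine to come up" lines above show the driver polling for the domain's DHCP lease with a growing, jittered delay until an address appears. A rough sketch of that retry shape, with lookupIP as a stand-in for the lease query (the exact delays in the log come from minikube's own retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network for the domain's
// DHCP lease; it is a placeholder for illustration.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, similar to the 310ms, 354ms,
		// 573ms, ... 5.5s spacing visible in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(sleep)
		delay = time.Duration(float64(delay) * 1.5)
	}
	return "", fmt.Errorf("timed out after %s waiting for machine to come up", timeout)
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	fmt.Println(ip, err)
}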
	I0318 20:50:29.838907   21691 main.go:141] libmachine: (ha-315064-m03) Reserving static IP address...
	I0318 20:50:29.839355   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find host DHCP lease matching {name: "ha-315064-m03", mac: "52:54:00:9e:ed:fb", ip: "192.168.39.84"} in network mk-ha-315064
	I0318 20:50:29.908146   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Getting to WaitForSSH function...
	I0318 20:50:29.908184   21691 main.go:141] libmachine: (ha-315064-m03) Reserved static IP address: 192.168.39.84
	I0318 20:50:29.908198   21691 main.go:141] libmachine: (ha-315064-m03) Waiting for SSH to be available...
	I0318 20:50:29.910745   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:29.911170   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:29.911192   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:29.911409   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Using SSH client type: external
	I0318 20:50:29.911426   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa (-rw-------)
	I0318 20:50:29.911450   21691 main.go:141] libmachine: (ha-315064-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 20:50:29.911463   21691 main.go:141] libmachine: (ha-315064-m03) DBG | About to run SSH command:
	I0318 20:50:29.911480   21691 main.go:141] libmachine: (ha-315064-m03) DBG | exit 0
	I0318 20:50:30.036845   21691 main.go:141] libmachine: (ha-315064-m03) DBG | SSH cmd err, output: <nil>: 
	I0318 20:50:30.037156   21691 main.go:141] libmachine: (ha-315064-m03) KVM machine creation complete!
	I0318 20:50:30.037500   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetConfigRaw
	I0318 20:50:30.037993   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:30.038164   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:30.038337   21691 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 20:50:30.038357   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:50:30.039802   21691 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 20:50:30.039820   21691 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 20:50:30.039828   21691 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 20:50:30.039837   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.041955   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.042325   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.042358   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.042508   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.042685   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.042857   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.042976   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.043135   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.043322   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.043333   21691 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 20:50:30.148306   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
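	The "About to run SSH command: exit 0" / "SSH cmd err, output: <nil>" pair above is the reachability probe: the node counts as up once a trivial command succeeds over SSH with the generated key. A self-contained sketch of such a probe using golang.org/x/crypto/ssh, with the address, user and key path taken from the log (this is not the libmachine implementation):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReachable(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	// The actual probe: any error here means SSH is not usable yet.
	return session.Run("exit 0")
}

func main() {
	err := sshReachable("192.168.39.84:22", "docker",
		"/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa")
	fmt.Println("ssh reachable:", err == nil, err)
}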
	I0318 20:50:30.148330   21691 main.go:141] libmachine: Detecting the provisioner...
	I0318 20:50:30.148337   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.151079   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.151470   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.151508   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.151697   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.151852   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.151955   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.152041   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.152144   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.152303   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.152317   21691 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 20:50:30.266017   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 20:50:30.266090   21691 main.go:141] libmachine: found compatible host: buildroot
	I0318 20:50:30.266104   21691 main.go:141] libmachine: Provisioning with buildroot...
	I0318 20:50:30.266116   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetMachineName
	I0318 20:50:30.266346   21691 buildroot.go:166] provisioning hostname "ha-315064-m03"
	I0318 20:50:30.266367   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetMachineName
	I0318 20:50:30.266550   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.269184   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.269593   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.269622   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.269732   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.269875   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.270030   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.270182   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.270359   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.270540   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.270557   21691 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-315064-m03 && echo "ha-315064-m03" | sudo tee /etc/hostname
	I0318 20:50:30.388652   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064-m03
	
	I0318 20:50:30.388688   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.391569   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.391986   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.392021   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.392165   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.392325   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.392456   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.392603   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.392764   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.393073   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.393096   21691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-315064-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-315064-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-315064-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:50:30.515326   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
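	The shell above is the idempotent hostname fix-up: if no /etc/hosts line already ends with the new hostname, either rewrite the existing 127.0.1.1 entry or append one. The same logic expressed locally in Go, purely for illustration (the provisioner runs the shell version over SSH; the path below is a scratch copy, not the real /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee logic from the log: make sure the
// hosts file at path maps 127.0.1.1 to the given hostname, without duplicating
// an entry that is already present.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) > 0 && fields[len(fields)-1] == hostname {
			return nil // an entry for this hostname already exists
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing entry
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts-copy", "ha-315064-m03"); err != nil {
		fmt.Println("error:", err)
	}
}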
	I0318 20:50:30.515359   21691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:50:30.515378   21691 buildroot.go:174] setting up certificates
	I0318 20:50:30.515390   21691 provision.go:84] configureAuth start
	I0318 20:50:30.515404   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetMachineName
	I0318 20:50:30.515737   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:50:30.518516   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.518911   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.518949   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.519121   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.521377   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.521727   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.521754   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.521873   21691 provision.go:143] copyHostCerts
	I0318 20:50:30.521901   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:50:30.521939   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 20:50:30.521950   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:50:30.522029   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:50:30.522114   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:50:30.522139   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 20:50:30.522149   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:50:30.522187   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:50:30.522246   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:50:30.522269   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 20:50:30.522278   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:50:30.522311   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:50:30.522384   21691 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.ha-315064-m03 san=[127.0.0.1 192.168.39.84 ha-315064-m03 localhost minikube]
	I0318 20:50:30.629470   21691 provision.go:177] copyRemoteCerts
	I0318 20:50:30.629534   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:50:30.629562   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.631999   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.632281   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.632306   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.632486   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.632687   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.632840   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.633053   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:50:30.718114   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 20:50:30.718193   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:50:30.752753   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 20:50:30.752825   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 20:50:30.781611   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 20:50:30.781691   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 20:50:30.809989   21691 provision.go:87] duration metric: took 294.58642ms to configureAuth
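	The scp lines above push the CA certificate, server certificate and server key generated by configureAuth onto the new node. A rough equivalent using the system scp binary and the same identity file (illustrative wiring only; minikube's ssh_runner writes into /etc/docker itself, which needs sudo on the guest, so this sketch stages the file in /tmp):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa"
	// Copy the CA cert the same way the logged scp step does; server.pem and
	// server-key.pem would follow the same pattern.
	cmd := exec.Command("scp",
		"-i", key,
		"-o", "StrictHostKeyChecking=no",
		"/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem",
		"docker@192.168.39.84:/tmp/ca.pem",
	)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}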
	I0318 20:50:30.810018   21691 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:50:30.810222   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:50:30.810296   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.812815   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.813186   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.813208   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.813386   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.813551   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.813705   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.813811   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.813966   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.814126   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.814140   21691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:50:31.114750   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
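	The "%!s(MISSING)" in the logged command is a formatting artifact of the log message itself (a %s verb printed without its argument); the command that actually runs appears to substitute the CRIO_MINIKUBE_OPTIONS block visible in the output. A sketch of assembling that command with the argument supplied, under that assumption:

package main

import "fmt"

func main() {
	// The options string matches the CRIO_MINIKUBE_OPTIONS content echoed in the log.
	opts := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	cmd := fmt.Sprintf(
		`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`,
		opts)
	fmt.Println(cmd)
}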
	
	I0318 20:50:31.114780   21691 main.go:141] libmachine: Checking connection to Docker...
	I0318 20:50:31.114791   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetURL
	I0318 20:50:31.116168   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Using libvirt version 6000000
	I0318 20:50:31.118277   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.118638   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.118661   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.118812   21691 main.go:141] libmachine: Docker is up and running!
	I0318 20:50:31.118839   21691 main.go:141] libmachine: Reticulating splines...
	I0318 20:50:31.118847   21691 client.go:171] duration metric: took 25.355588031s to LocalClient.Create
	I0318 20:50:31.118877   21691 start.go:167] duration metric: took 25.35565225s to libmachine.API.Create "ha-315064"
	I0318 20:50:31.118885   21691 start.go:293] postStartSetup for "ha-315064-m03" (driver="kvm2")
	I0318 20:50:31.118895   21691 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:50:31.118911   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.119130   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:50:31.119156   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:31.121250   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.121638   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.121667   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.121814   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:31.122007   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.122175   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:31.122346   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:50:31.212613   21691 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:50:31.217241   21691 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:50:31.217260   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:50:31.217313   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:50:31.217380   21691 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 20:50:31.217388   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 20:50:31.217469   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 20:50:31.228138   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:50:31.255268   21691 start.go:296] duration metric: took 136.370593ms for postStartSetup
	I0318 20:50:31.255318   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetConfigRaw
	I0318 20:50:31.255859   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:50:31.258428   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.258767   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.258785   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.259020   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:50:31.259244   21691 start.go:128] duration metric: took 25.513966628s to createHost
	I0318 20:50:31.259273   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:31.261403   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.261787   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.261819   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.261945   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:31.262175   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.262367   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.262521   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:31.262693   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:31.262853   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:31.262863   21691 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:50:31.370072   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710795031.353735337
	
	I0318 20:50:31.370095   21691 fix.go:216] guest clock: 1710795031.353735337
	I0318 20:50:31.370105   21691 fix.go:229] Guest: 2024-03-18 20:50:31.353735337 +0000 UTC Remote: 2024-03-18 20:50:31.259259981 +0000 UTC m=+249.418535446 (delta=94.475356ms)
	I0318 20:50:31.370123   21691 fix.go:200] guest clock delta is within tolerance: 94.475356ms
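	The fix.go lines above parse the guest's date +%s.%N output and accept the machine when its offset from the host clock stays within tolerance (about 94ms here). A small sketch of that comparison; the one-second tolerance is an assumption for illustration:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDeltaOK parses a guest timestamp in `date +%s.%N` form ("seconds.nanoseconds")
// and reports whether it is within tolerance of the given host time.
func clockDeltaOK(guest string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, false, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guestTime.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	delta, ok, err := clockDeltaOK("1710795031.353735337", time.Unix(0, 1710795031259259981), time.Second)
	fmt.Println(delta, ok, err)
}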
	I0318 20:50:31.370130   21691 start.go:83] releasing machines lock for "ha-315064-m03", held for 25.625002302s
	I0318 20:50:31.370151   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.370414   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:50:31.373240   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.373608   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.373637   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.375918   21691 out.go:177] * Found network options:
	I0318 20:50:31.377189   21691 out.go:177]   - NO_PROXY=192.168.39.79,192.168.39.231
	W0318 20:50:31.378336   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 20:50:31.378361   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 20:50:31.378373   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.378852   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.379029   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.379128   21691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:50:31.379165   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	W0318 20:50:31.379201   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 20:50:31.379226   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 20:50:31.379296   21691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:50:31.379317   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:31.381801   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.382183   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.382211   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.382230   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.382377   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:31.382545   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.382628   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.382651   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.382695   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:31.382782   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:31.382836   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:50:31.382951   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.383084   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:31.383237   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:50:31.627234   21691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:50:31.635144   21691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:50:31.635199   21691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:50:31.653653   21691 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 20:50:31.653671   21691 start.go:494] detecting cgroup driver to use...
	I0318 20:50:31.653734   21691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:50:31.672558   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:50:31.687818   21691 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:50:31.687863   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:50:31.702492   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:50:31.716665   21691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:50:31.847630   21691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:50:32.006944   21691 docker.go:233] disabling docker service ...
	I0318 20:50:32.007019   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:50:32.024873   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:50:32.038915   21691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:50:32.184898   21691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:50:32.305816   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:50:32.322666   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:50:32.345134   21691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:50:32.345197   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.357483   21691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:50:32.357536   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.368637   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.379719   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.390478   21691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:50:32.401809   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.412809   21691 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.431659   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.442734   21691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:50:32.452890   21691 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 20:50:32.452961   21691 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 20:50:32.467849   21691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 20:50:32.481434   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:50:32.613711   21691 ssh_runner.go:195] Run: sudo systemctl restart crio
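
Editor's note: the sequence above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf via sed (pin pause_image to registry.k8s.io/pause:3.9, force cgroup_manager = "cgroupfs", re-add conmon_cgroup = "pod", inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls) and then restarts crio. A rough Go equivalent of the first two substitutions, as an offline sketch with a hypothetical helper name; the real flow runs sed over SSH exactly as logged.

    package main

    import (
        "fmt"
        "regexp"
    )

    // applyCrioOverrides mirrors the first two sed edits above: pin the pause
    // image and force the cgroupfs cgroup manager in a CRI-O drop-in config.
    func applyCrioOverrides(conf string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(applyCrioOverrides(in))
    }
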
	I0318 20:50:32.767212   21691 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:50:32.767288   21691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:50:32.773709   21691 start.go:562] Will wait 60s for crictl version
	I0318 20:50:32.773775   21691 ssh_runner.go:195] Run: which crictl
	I0318 20:50:32.778241   21691 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:50:32.822124   21691 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 20:50:32.822194   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:50:32.857225   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:50:32.889870   21691 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:50:32.891335   21691 out.go:177]   - env NO_PROXY=192.168.39.79
	I0318 20:50:32.892662   21691 out.go:177]   - env NO_PROXY=192.168.39.79,192.168.39.231
	I0318 20:50:32.893811   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:50:32.896659   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:32.897093   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:32.897122   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:32.897332   21691 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:50:32.901886   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:50:32.915359   21691 mustload.go:65] Loading cluster: ha-315064
	I0318 20:50:32.915552   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:50:32.915834   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:50:32.915875   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:50:32.930960   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0318 20:50:32.931427   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:50:32.931856   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:50:32.931875   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:50:32.932159   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:50:32.932342   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:50:32.933792   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:50:32.934068   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:50:32.934106   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:50:32.949583   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0318 20:50:32.949956   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:50:32.950373   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:50:32.950395   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:50:32.950754   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:50:32.950976   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:50:32.951137   21691 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064 for IP: 192.168.39.84
	I0318 20:50:32.951149   21691 certs.go:194] generating shared ca certs ...
	I0318 20:50:32.951162   21691 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:50:32.951288   21691 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:50:32.951329   21691 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:50:32.951338   21691 certs.go:256] generating profile certs ...
	I0318 20:50:32.951404   21691 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key
	I0318 20:50:32.951429   21691 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.e2004a64
	I0318 20:50:32.951442   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.e2004a64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.231 192.168.39.84 192.168.39.254]
	I0318 20:50:33.397550   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.e2004a64 ...
	I0318 20:50:33.397576   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.e2004a64: {Name:mk1cf00bed9b040075db0bab18edcf4ebf6316c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:50:33.397729   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.e2004a64 ...
	I0318 20:50:33.397745   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.e2004a64: {Name:mkb9badf278f9f48de743fb3bc639185b71cdad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:50:33.397809   21691 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.e2004a64 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt
	I0318 20:50:33.397934   21691 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.e2004a64 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key
	I0318 20:50:33.398052   21691 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key
	I0318 20:50:33.398068   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 20:50:33.398080   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 20:50:33.398093   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 20:50:33.398107   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 20:50:33.398119   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 20:50:33.398131   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 20:50:33.398142   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 20:50:33.398157   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 20:50:33.398206   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 20:50:33.398237   21691 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 20:50:33.398247   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:50:33.398268   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:50:33.398287   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:50:33.398306   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:50:33.398343   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:50:33.398370   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 20:50:33.398389   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:50:33.398406   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 20:50:33.398435   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:50:33.401437   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:50:33.401817   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:50:33.401847   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:50:33.402013   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:50:33.402201   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:50:33.402340   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:50:33.402517   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:50:33.477288   21691 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0318 20:50:33.482978   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 20:50:33.495000   21691 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0318 20:50:33.500369   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0318 20:50:33.516823   21691 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 20:50:33.521352   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 20:50:33.544216   21691 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0318 20:50:33.549753   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0318 20:50:33.561485   21691 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0318 20:50:33.566199   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 20:50:33.578047   21691 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0318 20:50:33.582638   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0318 20:50:33.594502   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:50:33.626948   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:50:33.655870   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:50:33.684351   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:50:33.711402   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 20:50:33.739421   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 20:50:33.765512   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:50:33.793246   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 20:50:33.819281   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 20:50:33.845654   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:50:33.873533   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 20:50:33.901553   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 20:50:33.920635   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0318 20:50:33.945822   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 20:50:33.965276   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0318 20:50:33.984548   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 20:50:34.004526   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0318 20:50:34.023454   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 20:50:34.042214   21691 ssh_runner.go:195] Run: openssl version
	I0318 20:50:34.048500   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 20:50:34.061745   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 20:50:34.067023   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 20:50:34.067070   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 20:50:34.073352   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 20:50:34.085625   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:50:34.098824   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:50:34.103704   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:50:34.103758   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:50:34.110213   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 20:50:34.122051   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 20:50:34.133943   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 20:50:34.139035   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 20:50:34.139074   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 20:50:34.145671   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 20:50:34.158592   21691 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:50:34.163032   21691 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 20:50:34.163090   21691 kubeadm.go:928] updating node {m03 192.168.39.84 8443 v1.28.4 crio true true} ...
	I0318 20:50:34.163173   21691 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-315064-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:50:34.163207   21691 kube-vip.go:111] generating kube-vip config ...
	I0318 20:50:34.163233   21691 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 20:50:34.182135   21691 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 20:50:34.182201   21691 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
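
Editor's note: the YAML above is the kube-vip static-pod manifest; a few lines below it is copied to /etc/kubernetes/manifests/kube-vip.yaml, and its address env (192.168.39.254) matches the cluster's APIServerHAVIP. A small Go sketch that loads such a manifest and pulls out the image and VIP; the file path comes from the log, while the use of sigs.k8s.io/yaml and k8s.io/api here is an assumption for illustration.

    package main

    import (
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(raw, &pod); err != nil {
            panic(err)
        }
        if len(pod.Spec.Containers) == 0 {
            panic("no containers in manifest")
        }
        c := pod.Spec.Containers[0]
        fmt.Println("image:", c.Image) // ghcr.io/kube-vip/kube-vip:v0.7.1 in this run
        for _, e := range c.Env {
            if e.Name == "address" {
                fmt.Println("VIP:", e.Value) // 192.168.39.254 in this run
            }
        }
    }
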
	I0318 20:50:34.182256   21691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:50:34.197019   21691 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 20:50:34.197073   21691 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 20:50:34.208343   21691 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 20:50:34.208394   21691 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 20:50:34.208363   21691 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 20:50:34.208412   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 20:50:34.208442   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:50:34.208475   21691 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 20:50:34.208403   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 20:50:34.208543   21691 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 20:50:34.222975   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 20:50:34.223006   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 20:50:34.223009   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 20:50:34.223020   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 20:50:34.254646   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 20:50:34.254734   21691 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 20:50:34.352100   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 20:50:34.352146   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
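
Editor's note: the kubeadm/kubectl/kubelet binaries above are fetched from dl.k8s.io with a checksum=file:...sha256 suffix, i.e. each download is verified against the published .sha256 file before being placed under /var/lib/minikube/binaries/v1.28.4. A minimal sketch of that verify-after-download step; the helper names are hypothetical and the real code goes through minikube's download package, but the URLs match the ones in the log.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // fetch downloads a URL and returns its body bytes.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    // downloadVerified fetches a binary and checks it against its published .sha256 file.
    func downloadVerified(binURL string) ([]byte, error) {
        bin, err := fetch(binURL)
        if err != nil {
            return nil, err
        }
        sum, err := fetch(binURL + ".sha256")
        if err != nil {
            return nil, err
        }
        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0] // the file may contain "<hash>  <name>"
        if hex.EncodeToString(got[:]) != want {
            return nil, fmt.Errorf("checksum mismatch: got %x want %s", got, want)
        }
        return bin, nil
    }

    func main() {
        if _, err := downloadVerified("https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm"); err != nil {
            panic(err)
        }
        fmt.Println("kubeadm verified")
    }
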
	I0318 20:50:35.310220   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 20:50:35.321490   21691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0318 20:50:35.340489   21691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:50:35.358546   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 20:50:35.376766   21691 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 20:50:35.381356   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:50:35.395351   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:50:35.529705   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:50:35.552088   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:50:35.552397   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:50:35.552433   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:50:35.567606   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I0318 20:50:35.567950   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:50:35.568449   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:50:35.568483   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:50:35.568804   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:50:35.569031   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:50:35.569200   21691 start.go:316] joinCluster: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:50:35.569333   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 20:50:35.569352   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:50:35.572512   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:50:35.572940   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:50:35.572957   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:50:35.573167   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:50:35.573381   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:50:35.573534   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:50:35.573657   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:50:35.746637   21691 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:50:35.746696   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q9srpv.jjvmgylq5he4abea --discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-315064-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I0318 20:51:02.069561   21691 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q9srpv.jjvmgylq5he4abea --discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-315064-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (26.322835647s)
	I0318 20:51:02.069596   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 20:51:02.758565   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-315064-m03 minikube.k8s.io/updated_at=2024_03_18T20_51_02_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=ha-315064 minikube.k8s.io/primary=false
	I0318 20:51:02.904468   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-315064-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 20:51:03.037245   21691 start.go:318] duration metric: took 27.468042909s to joinCluster
	I0318 20:51:03.037325   21691 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:51:03.038867   21691 out.go:177] * Verifying Kubernetes components...
	I0318 20:51:03.037612   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:51:03.040194   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:51:03.261813   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:51:03.304329   21691 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:51:03.304606   21691 kapi.go:59] client config for ha-315064: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt", KeyFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key", CAFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 20:51:03.304657   21691 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.79:8443
	I0318 20:51:03.304869   21691 node_ready.go:35] waiting up to 6m0s for node "ha-315064-m03" to be "Ready" ...
	I0318 20:51:03.304966   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:03.304976   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:03.304985   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:03.304991   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:03.309985   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:03.805065   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:03.805085   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:03.805094   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:03.805099   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:03.809302   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:04.305047   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:04.305065   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:04.305073   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:04.305077   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:04.308973   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:04.805065   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:04.805089   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:04.805096   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:04.805100   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:04.809061   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:05.305087   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:05.305111   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:05.305123   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:05.305128   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:05.310505   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:05.311354   21691 node_ready.go:53] node "ha-315064-m03" has status "Ready":"False"
	I0318 20:51:05.805137   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:05.805158   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:05.805165   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:05.805169   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:05.808959   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:06.305962   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:06.305980   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:06.305988   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:06.305992   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:06.309603   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:06.805527   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:06.805551   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:06.805561   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:06.805569   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:06.809258   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:07.305941   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:07.305968   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:07.305980   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:07.305987   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:07.309210   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:07.806027   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:07.806054   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:07.806064   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:07.806071   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:07.810333   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:07.810847   21691 node_ready.go:53] node "ha-315064-m03" has status "Ready":"False"
	I0318 20:51:08.306120   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:08.306144   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:08.306154   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:08.306158   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:08.310547   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:08.805691   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:08.805712   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:08.805719   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:08.805723   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:08.809531   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:09.305280   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:09.305303   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:09.305312   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:09.305319   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:09.309264   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:09.805041   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:09.805061   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:09.805069   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:09.805075   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:09.808751   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:10.306092   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:10.306119   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:10.306126   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:10.306132   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:10.311801   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:10.313845   21691 node_ready.go:53] node "ha-315064-m03" has status "Ready":"False"
	I0318 20:51:10.805491   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:10.805511   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:10.805518   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:10.805522   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:10.809807   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:11.305002   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:11.305021   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.305029   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.305032   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.308395   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.309096   21691 node_ready.go:49] node "ha-315064-m03" has status "Ready":"True"
	I0318 20:51:11.309117   21691 node_ready.go:38] duration metric: took 8.004232778s for node "ha-315064-m03" to be "Ready" ...
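
Editor's note: node_ready.go above polls GET /api/v1/nodes/ha-315064-m03 roughly twice a second until the node reports Ready=True, which takes about 8s here. A compact client-go sketch of the same wait; the kubeconfig path, node name, and 6m timeout are taken from the log, while the helper function is an illustrative stand-in for minikube's kapi-based client.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready after %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18421-5321/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "ha-315064-m03", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }
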
	I0318 20:51:11.309127   21691 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 20:51:11.309190   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:11.309205   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.309215   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.309222   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.316399   21691 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 20:51:11.323391   21691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.323458   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgqzg
	I0318 20:51:11.323467   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.323474   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.323479   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.326209   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.326859   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:11.326874   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.326885   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.326892   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.329697   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.330177   21691 pod_ready.go:92] pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:11.330192   21691 pod_ready.go:81] duration metric: took 6.780065ms for pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.330199   21691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.330250   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hrrzn
	I0318 20:51:11.330260   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.330267   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.330273   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.332706   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.333325   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:11.333336   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.333342   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.333346   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.336535   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.337452   21691 pod_ready.go:92] pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:11.337465   21691 pod_ready.go:81] duration metric: took 7.25922ms for pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.337473   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.337507   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064
	I0318 20:51:11.337513   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.337520   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.337524   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.340356   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.340794   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:11.340807   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.340814   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.340817   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.343293   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.343698   21691 pod_ready.go:92] pod "etcd-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:11.343712   21691 pod_ready.go:81] duration metric: took 6.234619ms for pod "etcd-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.343720   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.343758   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m02
	I0318 20:51:11.343765   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.343771   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.343786   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.346392   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.346888   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:11.346900   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.346906   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.346910   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.350443   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.350866   21691 pod_ready.go:92] pod "etcd-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:11.350880   21691 pod_ready.go:81] duration metric: took 7.154681ms for pod "etcd-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.350887   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.505022   21691 request.go:629] Waited for 154.08429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:11.505091   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:11.505102   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.505114   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.505142   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.509099   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.705008   21691 request.go:629] Waited for 195.277006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:11.705063   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:11.705068   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.705078   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.705083   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.709058   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.905811   21691 request.go:629] Waited for 54.55273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:11.905863   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:11.905875   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.905882   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.905887   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.910034   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:12.106077   21691 request.go:629] Waited for 195.428399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.106130   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.106135   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.106143   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.106146   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.110591   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:12.351505   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:12.351525   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.351534   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.351539   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.355747   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:12.505815   21691 request.go:629] Waited for 149.297639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.505890   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.505896   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.505903   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.505908   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.509667   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:12.851220   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:12.851240   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.851251   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.851256   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.855159   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:12.906088   21691 request.go:629] Waited for 50.18813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.906158   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.906163   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.906170   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.906174   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.910097   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:13.351074   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:13.351095   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:13.351106   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:13.351115   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:13.356864   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:13.358162   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:13.358183   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:13.358194   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:13.358202   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:13.362613   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:13.363390   21691 pod_ready.go:102] pod "etcd-ha-315064-m03" in "kube-system" namespace has status "Ready":"False"
	I0318 20:51:13.851837   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:13.851869   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:13.851881   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:13.851887   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:13.855894   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:13.857155   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:13.857173   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:13.857185   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:13.857192   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:13.860373   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:14.351741   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:14.351767   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:14.351778   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:14.351782   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:14.356075   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:14.357252   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:14.357267   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:14.357277   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:14.357287   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:14.360811   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:14.851572   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:14.851603   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:14.851614   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:14.851620   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:14.857549   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:14.858215   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:14.858228   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:14.858236   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:14.858239   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:14.861864   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:15.351994   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:15.352015   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:15.352023   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:15.352027   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:15.358560   21691 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 20:51:15.359516   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:15.359532   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:15.359543   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:15.359546   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:15.364531   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:15.365243   21691 pod_ready.go:102] pod "etcd-ha-315064-m03" in "kube-system" namespace has status "Ready":"False"
	I0318 20:51:15.851857   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:15.851884   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:15.851892   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:15.851901   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:15.856607   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:15.857643   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:15.857658   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:15.857665   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:15.857671   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:15.861451   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:16.351875   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:16.351901   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.351913   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.351920   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.356532   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:16.357258   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:16.357275   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.357281   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.357286   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.360459   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:16.851441   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:16.851465   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.851477   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.851481   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.856511   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:16.857694   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:16.857715   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.857727   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.857731   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.862736   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:16.863361   21691 pod_ready.go:92] pod "etcd-ha-315064-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:16.863393   21691 pod_ready.go:81] duration metric: took 5.512499323s for pod "etcd-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.863418   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.863559   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064
	I0318 20:51:16.863572   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.863582   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.863587   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.867383   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:16.868157   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:16.868171   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.868181   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.868187   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.878385   21691 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 20:51:16.879349   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:16.879372   21691 pod_ready.go:81] duration metric: took 15.941575ms for pod "kube-apiserver-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.879386   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.879459   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m02
	I0318 20:51:16.879470   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.879480   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.879491   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.890541   21691 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 20:51:16.905819   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:16.905855   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.905863   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.905868   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.910649   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:16.911519   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:16.911537   21691 pod_ready.go:81] duration metric: took 32.143615ms for pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.911549   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.106023   21691 request.go:629] Waited for 194.404237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m03
	I0318 20:51:17.106123   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m03
	I0318 20:51:17.106132   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.106143   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.106156   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.110182   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:17.305434   21691 request.go:629] Waited for 194.408349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:17.305525   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:17.305536   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.305545   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.305551   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.310110   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:17.310746   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:17.310763   21691 pod_ready.go:81] duration metric: took 399.206242ms for pod "kube-apiserver-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.310772   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.505872   21691 request.go:629] Waited for 195.015201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064
	I0318 20:51:17.505933   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064
	I0318 20:51:17.505940   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.505952   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.505960   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.511609   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:17.705699   21691 request.go:629] Waited for 193.371749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:17.705756   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:17.705763   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.705773   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.705781   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.709793   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:17.710611   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:17.710630   21691 pod_ready.go:81] duration metric: took 399.850652ms for pod "kube-controller-manager-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.710644   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.905618   21691 request.go:629] Waited for 194.912966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m02
	I0318 20:51:17.905702   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m02
	I0318 20:51:17.905711   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.905719   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.905726   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.909529   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:18.106073   21691 request.go:629] Waited for 195.715176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:18.106133   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:18.106138   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.106152   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.106156   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.110132   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:18.110948   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:18.110965   21691 pod_ready.go:81] duration metric: took 400.313992ms for pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.110975   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.305009   21691 request.go:629] Waited for 193.97322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m03
	I0318 20:51:18.305072   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m03
	I0318 20:51:18.305077   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.305084   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.305089   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.308581   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:18.505880   21691 request.go:629] Waited for 196.37643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:18.505932   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:18.505937   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.505944   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.505948   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.510487   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:18.511246   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:18.511260   21691 pod_ready.go:81] duration metric: took 400.279961ms for pod "kube-controller-manager-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.511270   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bccjj" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.705350   21691 request.go:629] Waited for 194.030068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bccjj
	I0318 20:51:18.705441   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bccjj
	I0318 20:51:18.705451   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.705463   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.705470   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.710633   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:18.905774   21691 request.go:629] Waited for 194.350073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:18.905832   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:18.905859   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.905875   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.905880   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.909529   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:18.910316   21691 pod_ready.go:92] pod "kube-proxy-bccjj" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:18.910334   21691 pod_ready.go:81] duration metric: took 399.057772ms for pod "kube-proxy-bccjj" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.910347   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nf4sq" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.105376   21691 request.go:629] Waited for 194.966609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nf4sq
	I0318 20:51:19.105445   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nf4sq
	I0318 20:51:19.105454   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.105467   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.105476   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.109039   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:19.305405   21691 request.go:629] Waited for 195.350108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:19.305468   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:19.305478   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.305491   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.305501   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.309525   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:19.310272   21691 pod_ready.go:92] pod "kube-proxy-nf4sq" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:19.310294   21691 pod_ready.go:81] duration metric: took 399.938335ms for pod "kube-proxy-nf4sq" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.310307   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrm24" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.505355   21691 request.go:629] Waited for 194.963644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrm24
	I0318 20:51:19.505409   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrm24
	I0318 20:51:19.505414   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.505425   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.505429   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.510095   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:19.705193   21691 request.go:629] Waited for 194.263898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:19.705261   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:19.705266   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.705274   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.705280   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.710136   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:19.711409   21691 pod_ready.go:92] pod "kube-proxy-wrm24" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:19.711428   21691 pod_ready.go:81] duration metric: took 401.113388ms for pod "kube-proxy-wrm24" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.711440   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.905593   21691 request.go:629] Waited for 194.087403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064
	I0318 20:51:19.905659   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064
	I0318 20:51:19.905666   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.905675   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.905687   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.908724   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.105780   21691 request.go:629] Waited for 196.345738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:20.105842   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:20.105849   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.105867   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.105875   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.109845   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.110571   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:20.110590   21691 pod_ready.go:81] duration metric: took 399.142924ms for pod "kube-scheduler-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.110599   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.305624   21691 request.go:629] Waited for 194.961747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m02
	I0318 20:51:20.305680   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m02
	I0318 20:51:20.305686   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.305693   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.305697   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.309543   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.505549   21691 request.go:629] Waited for 195.407491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:20.505612   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:20.505617   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.505625   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.505629   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.508929   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.509783   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:20.509802   21691 pod_ready.go:81] duration metric: took 399.194649ms for pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.509812   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.705149   21691 request.go:629] Waited for 195.28478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m03
	I0318 20:51:20.705205   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m03
	I0318 20:51:20.705210   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.705217   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.705222   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.709528   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:20.905742   21691 request.go:629] Waited for 195.357574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:20.905809   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:20.905816   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.905826   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.905835   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.909571   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.910180   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:20.910196   21691 pod_ready.go:81] duration metric: took 400.378831ms for pod "kube-scheduler-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.910206   21691 pod_ready.go:38] duration metric: took 9.601068459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 20:51:20.910226   21691 api_server.go:52] waiting for apiserver process to appear ...
	I0318 20:51:20.910272   21691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:51:20.926940   21691 api_server.go:72] duration metric: took 17.889578919s to wait for apiserver process to appear ...
	I0318 20:51:20.926962   21691 api_server.go:88] waiting for apiserver healthz status ...
	I0318 20:51:20.926978   21691 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I0318 20:51:20.931787   21691 api_server.go:279] https://192.168.39.79:8443/healthz returned 200:
	ok
	I0318 20:51:20.931838   21691 round_trippers.go:463] GET https://192.168.39.79:8443/version
	I0318 20:51:20.931843   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.931850   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.931854   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.933159   21691 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 20:51:20.933311   21691 api_server.go:141] control plane version: v1.28.4
	I0318 20:51:20.933329   21691 api_server.go:131] duration metric: took 6.360085ms to wait for apiserver health ...
	I0318 20:51:20.933339   21691 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 20:51:21.105713   21691 request.go:629] Waited for 172.311357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:21.105761   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:21.105772   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:21.105798   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:21.105804   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:21.113904   21691 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 20:51:21.120645   21691 system_pods.go:59] 24 kube-system pods found
	I0318 20:51:21.120676   21691 system_pods.go:61] "coredns-5dd5756b68-fgqzg" [245a67a5-7e01-445d-a741-900dd301c127] Running
	I0318 20:51:21.120683   21691 system_pods.go:61] "coredns-5dd5756b68-hrrzn" [bd22f324-f86b-458f-8443-1fbb4c47521e] Running
	I0318 20:51:21.120689   21691 system_pods.go:61] "etcd-ha-315064" [9cda89d4-982e-4b59-9d41-5318d9927e10] Running
	I0318 20:51:21.120695   21691 system_pods.go:61] "etcd-ha-315064-m02" [330ca3db-e1ba-4ce7-9b37-c3d791f7a3ad] Running
	I0318 20:51:21.120701   21691 system_pods.go:61] "etcd-ha-315064-m03" [e59c305c-3942-4ac0-a78b-7f393410a0c4] Running
	I0318 20:51:21.120706   21691 system_pods.go:61] "kindnet-dvtw7" [88b28235-5259-453e-af33-f2ab8e7e6609] Running
	I0318 20:51:21.120712   21691 system_pods.go:61] "kindnet-tbghx" [9c5ae7df-5e40-42ca-b8e6-d7bbc335e065] Running
	I0318 20:51:21.120718   21691 system_pods.go:61] "kindnet-x8cpw" [19931ea9-b153-46b1-af81-56634a6a1c87] Running
	I0318 20:51:21.120724   21691 system_pods.go:61] "kube-apiserver-ha-315064" [efa72228-3815-4456-89ee-603b73e97ab9] Running
	I0318 20:51:21.120730   21691 system_pods.go:61] "kube-apiserver-ha-315064-m02" [2a466fac-9e4b-4887-8ad3-3f01d594b615] Running
	I0318 20:51:21.120737   21691 system_pods.go:61] "kube-apiserver-ha-315064-m03" [ed0be9ce-fa97-441b-8791-5ee60a9d5382] Running
	I0318 20:51:21.120747   21691 system_pods.go:61] "kube-controller-manager-ha-315064" [2630ed62-b0c8-4cee-899a-9f7d14eabefb] Running
	I0318 20:51:21.120754   21691 system_pods.go:61] "kube-controller-manager-ha-315064-m02" [ba8783c4-bba1-41ee-97d2-62186bd2f96e] Running
	I0318 20:51:21.120765   21691 system_pods.go:61] "kube-controller-manager-ha-315064-m03" [8ad4a754-6e8d-40f5-8348-47dbbf678066] Running
	I0318 20:51:21.120771   21691 system_pods.go:61] "kube-proxy-bccjj" [f0f1ef98-75cf-47cd-a99b-ba443d7df38a] Running
	I0318 20:51:21.120777   21691 system_pods.go:61] "kube-proxy-nf4sq" [4acc350a-a057-4bdb-9d95-ee583b48fe33] Running
	I0318 20:51:21.120784   21691 system_pods.go:61] "kube-proxy-wrm24" [b686bb37-4624-4b09-b335-d292a914e41c] Running
	I0318 20:51:21.120792   21691 system_pods.go:61] "kube-scheduler-ha-315064" [2d7ccbd2-5151-466c-83b1-39bdd17813d1] Running
	I0318 20:51:21.120799   21691 system_pods.go:61] "kube-scheduler-ha-315064-m02" [2a91d68a-c56f-43c9-985b-c0a2d72d56a8] Running
	I0318 20:51:21.120805   21691 system_pods.go:61] "kube-scheduler-ha-315064-m03" [0917880d-4c3d-452b-89b7-567674a24298] Running
	I0318 20:51:21.120811   21691 system_pods.go:61] "kube-vip-ha-315064" [af9ee260-66a6-435a-957c-40b598d3d9ec] Running
	I0318 20:51:21.120820   21691 system_pods.go:61] "kube-vip-ha-315064-m02" [45c22149-503d-49ed-8b45-63f95a8c402b] Running
	I0318 20:51:21.120826   21691 system_pods.go:61] "kube-vip-ha-315064-m03" [0d376644-8c01-4b2f-b3da-337bf602d246] Running
	I0318 20:51:21.120832   21691 system_pods.go:61] "storage-provisioner" [4ddebef9-cc69-4535-8dc5-9117878507d8] Running
	I0318 20:51:21.120841   21691 system_pods.go:74] duration metric: took 187.495665ms to wait for pod list to return data ...
	I0318 20:51:21.120855   21691 default_sa.go:34] waiting for default service account to be created ...
	I0318 20:51:21.305301   21691 request.go:629] Waited for 184.350388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/default/serviceaccounts
	I0318 20:51:21.305367   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/default/serviceaccounts
	I0318 20:51:21.305374   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:21.305384   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:21.305390   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:21.309703   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:21.309920   21691 default_sa.go:45] found service account: "default"
	I0318 20:51:21.309946   21691 default_sa.go:55] duration metric: took 189.082059ms for default service account to be created ...
	I0318 20:51:21.309958   21691 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 20:51:21.506072   21691 request.go:629] Waited for 196.048872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:21.506130   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:21.506136   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:21.506146   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:21.506152   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:21.513839   21691 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 20:51:21.520652   21691 system_pods.go:86] 24 kube-system pods found
	I0318 20:51:21.520682   21691 system_pods.go:89] "coredns-5dd5756b68-fgqzg" [245a67a5-7e01-445d-a741-900dd301c127] Running
	I0318 20:51:21.520691   21691 system_pods.go:89] "coredns-5dd5756b68-hrrzn" [bd22f324-f86b-458f-8443-1fbb4c47521e] Running
	I0318 20:51:21.520697   21691 system_pods.go:89] "etcd-ha-315064" [9cda89d4-982e-4b59-9d41-5318d9927e10] Running
	I0318 20:51:21.520702   21691 system_pods.go:89] "etcd-ha-315064-m02" [330ca3db-e1ba-4ce7-9b37-c3d791f7a3ad] Running
	I0318 20:51:21.520708   21691 system_pods.go:89] "etcd-ha-315064-m03" [e59c305c-3942-4ac0-a78b-7f393410a0c4] Running
	I0318 20:51:21.520713   21691 system_pods.go:89] "kindnet-dvtw7" [88b28235-5259-453e-af33-f2ab8e7e6609] Running
	I0318 20:51:21.520718   21691 system_pods.go:89] "kindnet-tbghx" [9c5ae7df-5e40-42ca-b8e6-d7bbc335e065] Running
	I0318 20:51:21.520724   21691 system_pods.go:89] "kindnet-x8cpw" [19931ea9-b153-46b1-af81-56634a6a1c87] Running
	I0318 20:51:21.520731   21691 system_pods.go:89] "kube-apiserver-ha-315064" [efa72228-3815-4456-89ee-603b73e97ab9] Running
	I0318 20:51:21.520739   21691 system_pods.go:89] "kube-apiserver-ha-315064-m02" [2a466fac-9e4b-4887-8ad3-3f01d594b615] Running
	I0318 20:51:21.520750   21691 system_pods.go:89] "kube-apiserver-ha-315064-m03" [ed0be9ce-fa97-441b-8791-5ee60a9d5382] Running
	I0318 20:51:21.520758   21691 system_pods.go:89] "kube-controller-manager-ha-315064" [2630ed62-b0c8-4cee-899a-9f7d14eabefb] Running
	I0318 20:51:21.520773   21691 system_pods.go:89] "kube-controller-manager-ha-315064-m02" [ba8783c4-bba1-41ee-97d2-62186bd2f96e] Running
	I0318 20:51:21.520780   21691 system_pods.go:89] "kube-controller-manager-ha-315064-m03" [8ad4a754-6e8d-40f5-8348-47dbbf678066] Running
	I0318 20:51:21.520787   21691 system_pods.go:89] "kube-proxy-bccjj" [f0f1ef98-75cf-47cd-a99b-ba443d7df38a] Running
	I0318 20:51:21.520798   21691 system_pods.go:89] "kube-proxy-nf4sq" [4acc350a-a057-4bdb-9d95-ee583b48fe33] Running
	I0318 20:51:21.520806   21691 system_pods.go:89] "kube-proxy-wrm24" [b686bb37-4624-4b09-b335-d292a914e41c] Running
	I0318 20:51:21.520813   21691 system_pods.go:89] "kube-scheduler-ha-315064" [2d7ccbd2-5151-466c-83b1-39bdd17813d1] Running
	I0318 20:51:21.520822   21691 system_pods.go:89] "kube-scheduler-ha-315064-m02" [2a91d68a-c56f-43c9-985b-c0a2d72d56a8] Running
	I0318 20:51:21.520829   21691 system_pods.go:89] "kube-scheduler-ha-315064-m03" [0917880d-4c3d-452b-89b7-567674a24298] Running
	I0318 20:51:21.520835   21691 system_pods.go:89] "kube-vip-ha-315064" [af9ee260-66a6-435a-957c-40b598d3d9ec] Running
	I0318 20:51:21.520840   21691 system_pods.go:89] "kube-vip-ha-315064-m02" [45c22149-503d-49ed-8b45-63f95a8c402b] Running
	I0318 20:51:21.520847   21691 system_pods.go:89] "kube-vip-ha-315064-m03" [0d376644-8c01-4b2f-b3da-337bf602d246] Running
	I0318 20:51:21.520853   21691 system_pods.go:89] "storage-provisioner" [4ddebef9-cc69-4535-8dc5-9117878507d8] Running
	I0318 20:51:21.520864   21691 system_pods.go:126] duration metric: took 210.898433ms to wait for k8s-apps to be running ...
	I0318 20:51:21.520877   21691 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 20:51:21.520942   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:51:21.536487   21691 system_svc.go:56] duration metric: took 15.602909ms WaitForService to wait for kubelet
	I0318 20:51:21.536510   21691 kubeadm.go:576] duration metric: took 18.499152902s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:51:21.536527   21691 node_conditions.go:102] verifying NodePressure condition ...
	I0318 20:51:21.705793   21691 request.go:629] Waited for 169.199603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes
	I0318 20:51:21.705867   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes
	I0318 20:51:21.705879   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:21.705891   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:21.705903   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:21.709680   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:21.711133   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:51:21.711152   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:51:21.711161   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:51:21.711164   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:51:21.711168   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:51:21.711174   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:51:21.711177   21691 node_conditions.go:105] duration metric: took 174.644722ms to run NodePressure ...
	I0318 20:51:21.711187   21691 start.go:240] waiting for startup goroutines ...
	I0318 20:51:21.711205   21691 start.go:254] writing updated cluster config ...
	I0318 20:51:21.711465   21691 ssh_runner.go:195] Run: rm -f paused
	I0318 20:51:21.762537   21691 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 20:51:21.764859   21691 out.go:177] * Done! kubectl is now configured to use "ha-315064" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.555737995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795292555714621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e53bece-1a4b-49d7-b311-009365213552 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.556479885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d761ad0-bda2-4377-9514-a7b88f7a9c85 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.556530683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d761ad0-bda2-4377-9514-a7b88f7a9c85 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.558569064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795086270877739,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710794986574984128,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710794985566639782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fff81c800b4288ef8749d177de5f1726d2af1be720e1a6e1a0c2b8e0ff10ed2,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710794843930356439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7,PodSandboxId:b9df3be0d95884a3d71e847d349251a81a13e837983404bcaf81d6d9748758c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843913371220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843906534490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]
string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014,PodSandboxId:82cdf7455196021f3853bb2dd622d30dee8a1278e46f5fb19d82b90c0c02b4f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710794841592504126,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710794837867698435,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6e1dea6afc79ba67ab10e5ebf1a855fb49ade8da5cefcd4d1b1e5dbefc84d6,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710794825173561169,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710794818263723408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81,PodSandboxId:73af5e6e2e583a7e29d168405187833dd1664279333c126592cef9455f9ca215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710794818214840878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710794818183781486,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c,PodSandboxId:7f93400f03a78dc3fcbd62b31f359208d3ee2c560f19c9b5e586f963f19ca6f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710794818123936193,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d761ad0-bda2-4377-9514-a7b88f7a9c85 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.611961878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc3e2d43-1a7a-4c30-bde7-aecaf1951885 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.612115524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc3e2d43-1a7a-4c30-bde7-aecaf1951885 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.615638755Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=745dc6f9-8671-45c5-9d8b-1bb6034c088c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.616131405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795292616102914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=745dc6f9-8671-45c5-9d8b-1bb6034c088c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.616868391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ea3740c-aeb2-49b8-a8cc-7967563eb034 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.616917783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ea3740c-aeb2-49b8-a8cc-7967563eb034 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.617238251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795086270877739,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710794986574984128,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710794985566639782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fff81c800b4288ef8749d177de5f1726d2af1be720e1a6e1a0c2b8e0ff10ed2,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710794843930356439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7,PodSandboxId:b9df3be0d95884a3d71e847d349251a81a13e837983404bcaf81d6d9748758c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843913371220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843906534490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]
string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014,PodSandboxId:82cdf7455196021f3853bb2dd622d30dee8a1278e46f5fb19d82b90c0c02b4f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710794841592504126,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710794837867698435,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6e1dea6afc79ba67ab10e5ebf1a855fb49ade8da5cefcd4d1b1e5dbefc84d6,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710794825173561169,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710794818263723408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81,PodSandboxId:73af5e6e2e583a7e29d168405187833dd1664279333c126592cef9455f9ca215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710794818214840878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710794818183781486,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c,PodSandboxId:7f93400f03a78dc3fcbd62b31f359208d3ee2c560f19c9b5e586f963f19ca6f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710794818123936193,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ea3740c-aeb2-49b8-a8cc-7967563eb034 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.660348675Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=692e8a90-7a40-47fb-a512-e5f3fb6b5c93 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.660442049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=692e8a90-7a40-47fb-a512-e5f3fb6b5c93 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.661474851Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2eb9b96f-9360-496f-91de-8f32f5f27d3e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.662205941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795292662181263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2eb9b96f-9360-496f-91de-8f32f5f27d3e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.662682226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00bf5645-1e76-444c-aa25-7d1d048e7c7e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.662729899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00bf5645-1e76-444c-aa25-7d1d048e7c7e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.662980175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795086270877739,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710794986574984128,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710794985566639782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fff81c800b4288ef8749d177de5f1726d2af1be720e1a6e1a0c2b8e0ff10ed2,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710794843930356439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7,PodSandboxId:b9df3be0d95884a3d71e847d349251a81a13e837983404bcaf81d6d9748758c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843913371220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843906534490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]
string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014,PodSandboxId:82cdf7455196021f3853bb2dd622d30dee8a1278e46f5fb19d82b90c0c02b4f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710794841592504126,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710794837867698435,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6e1dea6afc79ba67ab10e5ebf1a855fb49ade8da5cefcd4d1b1e5dbefc84d6,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710794825173561169,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710794818263723408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81,PodSandboxId:73af5e6e2e583a7e29d168405187833dd1664279333c126592cef9455f9ca215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710794818214840878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710794818183781486,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c,PodSandboxId:7f93400f03a78dc3fcbd62b31f359208d3ee2c560f19c9b5e586f963f19ca6f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710794818123936193,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00bf5645-1e76-444c-aa25-7d1d048e7c7e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.714144487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=521c2b54-f0b8-4d15-93a0-dd7401570571 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.714217495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=521c2b54-f0b8-4d15-93a0-dd7401570571 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.715571831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ec34d90-fe16-4f16-8669-9b43a9667534 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.715980747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795292715960779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ec34d90-fe16-4f16-8669-9b43a9667534 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.716849004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e7dcc1e-5274-40c5-9f05-cf51fd22178c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.716899506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e7dcc1e-5274-40c5-9f05-cf51fd22178c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:54:52 ha-315064 crio[681]: time="2024-03-18 20:54:52.717246596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795086270877739,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710794986574984128,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710794985566639782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fff81c800b4288ef8749d177de5f1726d2af1be720e1a6e1a0c2b8e0ff10ed2,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710794843930356439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7,PodSandboxId:b9df3be0d95884a3d71e847d349251a81a13e837983404bcaf81d6d9748758c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843913371220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843906534490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]
string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014,PodSandboxId:82cdf7455196021f3853bb2dd622d30dee8a1278e46f5fb19d82b90c0c02b4f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710794841592504126,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710794837867698435,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6e1dea6afc79ba67ab10e5ebf1a855fb49ade8da5cefcd4d1b1e5dbefc84d6,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710794825173561169,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710794818263723408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81,PodSandboxId:73af5e6e2e583a7e29d168405187833dd1664279333c126592cef9455f9ca215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710794818214840878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710794818183781486,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c,PodSandboxId:7f93400f03a78dc3fcbd62b31f359208d3ee2c560f19c9b5e586f963f19ca6f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710794818123936193,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e7dcc1e-5274-40c5-9f05-cf51fd22178c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	962d0c8af6a9a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   b1e1139d7a57e       busybox-5b5d89c9d6-c7lzc
	3e90a0712d87d       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  1                   154ec2a128fe5       kube-vip-ha-315064
	10b2ec1f74690       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       1                   9426401fe1ab3       storage-provisioner
	2fff81c800b42       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   9426401fe1ab3       storage-provisioner
	bfac5d0e77417       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   b9df3be0d9588       coredns-5dd5756b68-fgqzg
	d5c124916621e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   868a925ed8d8e       coredns-5dd5756b68-hrrzn
	a7126db5f2812       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago       Running             kindnet-cni               0                   82cdf74551960       kindnet-tbghx
	df303842f5387       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago       Running             kube-proxy                0                   01b267bb0cc88       kube-proxy-wrm24
	6c6e1dea6afc7       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Exited              kube-vip                  0                   154ec2a128fe5       kube-vip-ha-315064
	1a42f9c834d0e       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago       Running             kube-scheduler            0                   b8f2e721ddf5c       kube-scheduler-ha-315064
	80a67e792a683       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago       Running             kube-apiserver            0                   73af5e6e2e583       kube-apiserver-ha-315064
	3dfd1d922dc88       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago       Running             etcd                      0                   2223b5076d0b6       etcd-ha-315064
	4480ab4493cfa       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago       Running             kube-controller-manager   0                   7f93400f03a78       kube-controller-manager-ha-315064
	
	
	==> coredns [bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7] <==
	[INFO] 10.244.1.2:40332 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001582028s
	[INFO] 10.244.1.2:33788 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001595063s
	[INFO] 10.244.0.4:57531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110111s
	[INFO] 10.244.0.4:51555 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000080591s
	[INFO] 10.244.2.2:56578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147831s
	[INFO] 10.244.2.2:53449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00029003s
	[INFO] 10.244.2.2:60915 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002720288s
	[INFO] 10.244.2.2:36698 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00016504s
	[INFO] 10.244.1.2:42460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174593s
	[INFO] 10.244.1.2:45245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000246387s
	[INFO] 10.244.1.2:41375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008534s
	[INFO] 10.244.1.2:50419 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000325333s
	[INFO] 10.244.1.2:44785 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147222s
	[INFO] 10.244.0.4:53351 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001837559s
	[INFO] 10.244.0.4:56449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081811s
	[INFO] 10.244.0.4:52543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112771s
	[INFO] 10.244.2.2:45761 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195234s
	[INFO] 10.244.2.2:59241 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120541s
	[INFO] 10.244.1.2:34891 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210022s
	[INFO] 10.244.1.2:34411 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010526s
	[INFO] 10.244.0.4:35654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123761s
	[INFO] 10.244.0.4:55976 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121291s
	[INFO] 10.244.2.2:60584 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000199858s
	[INFO] 10.244.2.2:57089 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139333s
	[INFO] 10.244.1.2:47817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142953s
	
	
	==> coredns [d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea] <==
	[INFO] 10.244.2.2:59025 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167194s
	[INFO] 10.244.2.2:49800 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116262s
	[INFO] 10.244.1.2:34969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001875316s
	[INFO] 10.244.1.2:45722 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148316s
	[INFO] 10.244.1.2:51432 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00155768s
	[INFO] 10.244.0.4:35472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011908s
	[INFO] 10.244.0.4:59665 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001225277s
	[INFO] 10.244.0.4:48478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082298s
	[INFO] 10.244.0.4:58488 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037583s
	[INFO] 10.244.0.4:52714 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122718s
	[INFO] 10.244.2.2:38213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144668s
	[INFO] 10.244.2.2:33237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140758s
	[INFO] 10.244.1.2:55432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156014s
	[INFO] 10.244.1.2:43813 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140774s
	[INFO] 10.244.0.4:56118 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008172s
	[INFO] 10.244.0.4:50788 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172997s
	[INFO] 10.244.2.2:59802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176543s
	[INFO] 10.244.2.2:48593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240495s
	[INFO] 10.244.1.2:57527 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153491s
	[INFO] 10.244.1.2:41470 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000189177s
	[INFO] 10.244.1.2:34055 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148936s
	[INFO] 10.244.0.4:58773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274692s
	[INFO] 10.244.0.4:38762 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072594s
	[INFO] 10.244.0.4:34340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059481s
	[INFO] 10.244.0.4:56101 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011093s
	
	
	==> describe nodes <==
	Name:               ha-315064
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T20_47_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:47:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:54:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 20:51:43 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 20:51:43 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 20:51:43 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 20:51:43 +0000   Mon, 18 Mar 2024 20:47:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-315064
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 67f9d3eed04b4b99974be1860661f403
	  System UUID:                67f9d3ee-d04b-4b99-974b-e1860661f403
	  Boot ID:                    da42c8d7-0f88-49a8-83c7-2bcbed46eb7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-c7lzc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 coredns-5dd5756b68-fgqzg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m36s
	  kube-system                 coredns-5dd5756b68-hrrzn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m36s
	  kube-system                 etcd-ha-315064                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m45s
	  kube-system                 kindnet-tbghx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-apiserver-ha-315064             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-controller-manager-ha-315064    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-proxy-wrm24                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-scheduler-ha-315064             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-vip-ha-315064                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m34s  kube-proxy       
	  Normal  Starting                 7m46s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m46s  kubelet          Node ha-315064 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m46s  kubelet          Node ha-315064 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m46s  kubelet          Node ha-315064 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m46s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m36s  node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal  NodeReady                7m30s  kubelet          Node ha-315064 status is now: NodeReady
	  Normal  RegisteredNode           4m50s  node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal  RegisteredNode           3m37s  node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	
	
	Name:               ha-315064-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_49_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:49:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:52:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 20:51:33 +0000   Mon, 18 Mar 2024 20:53:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 20:51:33 +0000   Mon, 18 Mar 2024 20:53:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 20:51:33 +0000   Mon, 18 Mar 2024 20:53:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 20:51:33 +0000   Mon, 18 Mar 2024 20:53:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-315064-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 84b0eca72c194ee2b4b37351cd8bc63f
	  System UUID:                84b0eca7-2c19-4ee2-b4b3-7351cd8bc63f
	  Boot ID:                    0bb32325-70b1-4a0c-8d83-e3322fb70efd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-7z7sj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-315064-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-dvtw7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-315064-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-ha-315064-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-bccjj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-315064-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-vip-ha-315064-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m57s  kube-proxy       
	  Normal  RegisteredNode  4m50s  node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  RegisteredNode  3m37s  node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  NodeNotReady    102s   node-controller  Node ha-315064-m02 status is now: NodeNotReady
	
	
	Name:               ha-315064-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_51_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:50:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:54:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 20:51:30 +0000   Mon, 18 Mar 2024 20:50:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 20:51:30 +0000   Mon, 18 Mar 2024 20:50:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 20:51:30 +0000   Mon, 18 Mar 2024 20:50:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 20:51:30 +0000   Mon, 18 Mar 2024 20:51:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-315064-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf0ce8da0ac342e5b4cd58e80d68360c
	  System UUID:                cf0ce8da-0ac3-42e5-b4cd-58e80d68360c
	  Boot ID:                    d08a8a9e-b8e0-4b9d-a83b-1485ac5ce43c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-5hmqj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-315064-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m47s
	  kube-system                 kindnet-x8cpw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-apiserver-ha-315064-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-controller-manager-ha-315064-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-proxy-nf4sq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ha-315064-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-vip-ha-315064-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        3m48s  kube-proxy       
	  Normal  RegisteredNode  3m51s  node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal  RegisteredNode  3m50s  node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal  RegisteredNode  3m37s  node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	
	
	Name:               ha-315064-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_52_03_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:52:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:54:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 20:52:32 +0000   Mon, 18 Mar 2024 20:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 20:52:32 +0000   Mon, 18 Mar 2024 20:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 20:52:32 +0000   Mon, 18 Mar 2024 20:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 20:52:32 +0000   Mon, 18 Mar 2024 20:52:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-315064-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e505b03139344fc9b8ceffed32c9bea6
	  System UUID:                e505b031-3934-4fc9-b8ce-ffed32c9bea6
	  Boot ID:                    2195ee59-5053-4efb-a904-3189e0b7888f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwjjr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-dhhjx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m51s (x5 over 2m53s)  kubelet          Node ha-315064-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x5 over 2m53s)  kubelet          Node ha-315064-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x5 over 2m53s)  kubelet          Node ha-315064-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal  RegisteredNode           2m46s                  node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-315064-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar18 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051703] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042795] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.565276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.402518] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.662299] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.134199] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.060622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061926] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.170895] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.158641] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.304087] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +5.155955] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.063498] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.791144] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.535740] kauditd_printk_skb: 57 callbacks suppressed
	[Mar18 20:47] kauditd_printk_skb: 35 callbacks suppressed
	[  +2.157125] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[ +10.330891] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.069969] kauditd_printk_skb: 36 callbacks suppressed
	[Mar18 20:49] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d] <==
	{"level":"warn","ts":"2024-03-18T20:54:53.018203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.029288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.035884Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.052364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.077262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.088499Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.088702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.09275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.096831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.104215Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.110959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.120952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.124868Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.128704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.140764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.148196Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.1578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.163764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.167863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.174226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.185257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.188166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.205283Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.282514Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:54:53.289318Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:54:53 up 8 min,  0 users,  load average: 0.35, 0.32, 0.18
	Linux ha-315064 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014] <==
	I0318 20:54:20.132794       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 20:54:30.145386       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 20:54:30.145436       1 main.go:227] handling current node
	I0318 20:54:30.145456       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 20:54:30.145462       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 20:54:30.145667       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 20:54:30.145702       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 20:54:30.145764       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 20:54:30.145794       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 20:54:40.152621       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 20:54:40.152665       1 main.go:227] handling current node
	I0318 20:54:40.152675       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 20:54:40.152681       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 20:54:40.152799       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 20:54:40.152831       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 20:54:40.152890       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 20:54:40.152926       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 20:54:50.160018       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 20:54:50.160274       1 main.go:227] handling current node
	I0318 20:54:50.160300       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 20:54:50.160306       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 20:54:50.160453       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 20:54:50.160486       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 20:54:50.160549       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 20:54:50.160580       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81] <==
	Trace[296947821]: ---"Write to database call failed" len:2996,err:etcdserver: leader changed 7232ms (20:49:49.545)
	Trace[296947821]: [7.232766718s] [7.232766718s] END
	I0318 20:49:49.605535       1 trace.go:236] Trace[1402986245]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4a7d370c-2ca9-49d2-8803-257dba6db4c6,client:192.168.39.231,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-315064-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (18-Mar-2024 20:49:44.975) (total time: 4630ms):
	Trace[1402986245]: ["GuaranteedUpdate etcd3" audit-id:4a7d370c-2ca9-49d2-8803-257dba6db4c6,key:/minions/ha-315064-m02,type:*core.Node,resource:nodes 4629ms (20:49:44.975)
	Trace[1402986245]:  ---"Txn call completed" 4626ms (20:49:49.605)]
	Trace[1402986245]: ---"Object stored in database" 4628ms (20:49:49.605)
	Trace[1402986245]: [4.63024099s] [4.63024099s] END
	I0318 20:49:49.607436       1 trace.go:236] Trace[1594722573]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:4b136828-60e3-45ad-beb9-94863dc9aae1,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-4t3sitwd5gbl3axy65q2vglx6a,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 20:49:45.144) (total time: 4462ms):
	Trace[1594722573]: ["GuaranteedUpdate etcd3" audit-id:4b136828-60e3-45ad-beb9-94863dc9aae1,key:/leases/kube-system/apiserver-4t3sitwd5gbl3axy65q2vglx6a,type:*coordination.Lease,resource:leases.coordination.k8s.io 4462ms (20:49:45.144)
	Trace[1594722573]:  ---"Txn call completed" 4461ms (20:49:49.607)]
	Trace[1594722573]: [4.462900672s] [4.462900672s] END
	I0318 20:49:49.608002       1 trace.go:236] Trace[1196706199]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f965a1ef-c854-450f-8599-0a2b535aa72d,client:192.168.39.231,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 20:49:43.596) (total time: 6011ms):
	Trace[1196706199]: ["Create etcd3" audit-id:f965a1ef-c854-450f-8599-0a2b535aa72d,key:/events/kube-system/kube-vip-ha-315064-m02.17bdf6f7934b81c5,type:*core.Event,resource:events 6010ms (20:49:43.597)
	Trace[1196706199]:  ---"Txn call succeeded" 6010ms (20:49:49.607)]
	Trace[1196706199]: [6.011226268s] [6.011226268s] END
	I0318 20:49:49.610523       1 trace.go:236] Trace[656951406]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:d5d3b0f4-8498-475c-a4ff-62ea8cdd9e02,client:192.168.39.79,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-315064-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (18-Mar-2024 20:49:47.331) (total time: 2278ms):
	Trace[656951406]: ["GuaranteedUpdate etcd3" audit-id:d5d3b0f4-8498-475c-a4ff-62ea8cdd9e02,key:/minions/ha-315064-m02,type:*core.Node,resource:nodes 2278ms (20:49:47.331)
	Trace[656951406]:  ---"Txn call completed" 2274ms (20:49:49.608)]
	Trace[656951406]: ---"About to apply patch" 2275ms (20:49:49.608)
	Trace[656951406]: [2.278832914s] [2.278832914s] END
	I0318 20:49:49.646349       1 trace.go:236] Trace[571068067]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:26da06d3-3f80-46d8-9a47-317bc5453de2,client:192.168.39.231,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 20:49:44.320) (total time: 5326ms):
	Trace[571068067]: [5.326171145s] [5.326171145s] END
	I0318 20:49:49.654763       1 trace.go:236] Trace[462604528]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0802db0a-a1ee-4bbb-ac65-29622b29adc0,client:192.168.39.231,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 20:49:43.317) (total time: 6337ms):
	Trace[462604528]: [6.337589479s] [6.337589479s] END
	W0318 20:52:43.542685       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.79 192.168.39.84]
	
	
	==> kube-controller-manager [4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c] <==
	I0318 20:51:23.327294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="73.263µs"
	I0318 20:51:23.334501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="102.605µs"
	I0318 20:51:23.413872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.890061ms"
	I0318 20:51:23.414173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="145.932µs"
	I0318 20:51:26.640351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.728609ms"
	I0318 20:51:26.640410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="32.692µs"
	I0318 20:51:26.789708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.092066ms"
	I0318 20:51:26.789821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="55.014µs"
	I0318 20:51:27.032696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="19.893718ms"
	I0318 20:51:27.033415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="110.676µs"
	E0318 20:52:00.689350       1 certificate_controller.go:146] Sync csr-c2rvn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-c2rvn": the object has been modified; please apply your changes to the latest version and try again
	E0318 20:52:00.704777       1 certificate_controller.go:146] Sync csr-c2rvn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-c2rvn": the object has been modified; please apply your changes to the latest version and try again
	I0318 20:52:02.460635       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-315064-m04\" does not exist"
	I0318 20:52:02.486150       1 range_allocator.go:380] "Set node PodCIDR" node="ha-315064-m04" podCIDRs=["10.244.3.0/24"]
	I0318 20:52:02.540204       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t4cmt"
	I0318 20:52:02.540361       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dhhjx"
	I0318 20:52:02.801228       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-bl5jr"
	I0318 20:52:02.802628       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-qdg66"
	I0318 20:52:02.810597       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-ssp7z"
	I0318 20:52:07.264188       1 event.go:307] "Event occurred" object="ha-315064-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller"
	I0318 20:52:07.281193       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-315064-m04"
	I0318 20:52:11.378440       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-315064-m04"
	I0318 20:53:11.925932       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-315064-m04"
	I0318 20:53:12.001412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.520303ms"
	I0318 20:53:12.002462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.593µs"
	
	
	==> kube-proxy [df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a] <==
	I0318 20:47:18.215548       1 server_others.go:69] "Using iptables proxy"
	I0318 20:47:18.231786       1 node.go:141] Successfully retrieved node IP: 192.168.39.79
	I0318 20:47:18.348869       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 20:47:18.348893       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 20:47:18.351936       1 server_others.go:152] "Using iptables Proxier"
	I0318 20:47:18.352529       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 20:47:18.352858       1 server.go:846] "Version info" version="v1.28.4"
	I0318 20:47:18.352869       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 20:47:18.361190       1 config.go:188] "Starting service config controller"
	I0318 20:47:18.361678       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 20:47:18.361711       1 config.go:97] "Starting endpoint slice config controller"
	I0318 20:47:18.361715       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 20:47:18.363802       1 config.go:315] "Starting node config controller"
	I0318 20:47:18.363863       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 20:47:18.465585       1 shared_informer.go:318] Caches are synced for service config
	I0318 20:47:18.465657       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 20:47:18.465978       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62] <==
	I0318 20:51:22.845862       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-7z7sj" node="ha-315064-m02"
	E0318 20:51:22.852463       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-c7lzc\": pod busybox-5b5d89c9d6-c7lzc is already assigned to node \"ha-315064\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-c7lzc" node="ha-315064"
	E0318 20:51:22.852530       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b(default/busybox-5b5d89c9d6-c7lzc) wasn't assumed so cannot be forgotten"
	E0318 20:51:22.852559       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-c7lzc\": pod busybox-5b5d89c9d6-c7lzc is already assigned to node \"ha-315064\"" pod="default/busybox-5b5d89c9d6-c7lzc"
	I0318 20:51:22.852583       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-c7lzc" node="ha-315064"
	E0318 20:52:02.626499       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dhhjx\": pod kube-proxy-dhhjx is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dhhjx" node="ha-315064-m04"
	E0318 20:52:02.626993       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod c1714ef0-05aa-46ae-9e20-215a6ce0b13b(kube-system/kube-proxy-dhhjx) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.627298       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dhhjx\": pod kube-proxy-dhhjx is already assigned to node \"ha-315064-m04\"" pod="kube-system/kube-proxy-dhhjx"
	I0318 20:52:02.627457       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dhhjx" node="ha-315064-m04"
	E0318 20:52:02.647637       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t4cmt\": pod kindnet-t4cmt is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-t4cmt" node="ha-315064-m04"
	E0318 20:52:02.647879       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 0f71828b-9b62-43d2-ae99-304677e7535c(kube-system/kindnet-t4cmt) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.648085       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t4cmt\": pod kindnet-t4cmt is already assigned to node \"ha-315064-m04\"" pod="kube-system/kindnet-t4cmt"
	I0318 20:52:02.648204       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t4cmt" node="ha-315064-m04"
	E0318 20:52:02.722386       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ssp7z\": pod kindnet-ssp7z is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ssp7z" node="ha-315064-m04"
	E0318 20:52:02.724006       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod f0ef0560-8258-4d76-b09d-a6f400e388cf(kube-system/kindnet-ssp7z) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.723157       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qdg66\": pod kube-proxy-qdg66 is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qdg66" node="ha-315064-m04"
	E0318 20:52:02.724575       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 28b889b8-8098-4966-8984-abb855c84d0b(kube-system/kube-proxy-qdg66) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.724597       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qdg66\": pod kube-proxy-qdg66 is already assigned to node \"ha-315064-m04\"" pod="kube-system/kube-proxy-qdg66"
	I0318 20:52:02.724613       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qdg66" node="ha-315064-m04"
	E0318 20:52:02.724690       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ssp7z\": pod kindnet-ssp7z is already assigned to node \"ha-315064-m04\"" pod="kube-system/kindnet-ssp7z"
	I0318 20:52:02.724964       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ssp7z" node="ha-315064-m04"
	E0318 20:52:02.749937       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwjjr\": pod kindnet-rwjjr is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwjjr" node="ha-315064-m04"
	E0318 20:52:02.750089       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e5f58aa1-891b-47d6-ad96-6896c8500bf5(kube-system/kindnet-rwjjr) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.750127       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwjjr\": pod kindnet-rwjjr is already assigned to node \"ha-315064-m04\"" pod="kube-system/kindnet-rwjjr"
	I0318 20:52:02.750149       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rwjjr" node="ha-315064-m04"
	
	
	==> kubelet <==
	Mar 18 20:50:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:50:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:50:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 20:51:07 ha-315064 kubelet[1363]: E0318 20:51:07.735985    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:51:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:51:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:51:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:51:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 20:51:22 ha-315064 kubelet[1363]: I0318 20:51:22.818888    1363 topology_manager.go:215] "Topology Admit Handler" podUID="3878d9ed-31cf-4a22-9a2e-9866d43fdb8b" podNamespace="default" podName="busybox-5b5d89c9d6-c7lzc"
	Mar 18 20:51:22 ha-315064 kubelet[1363]: I0318 20:51:22.864469    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmpq\" (UniqueName: \"kubernetes.io/projected/3878d9ed-31cf-4a22-9a2e-9866d43fdb8b-kube-api-access-zjmpq\") pod \"busybox-5b5d89c9d6-c7lzc\" (UID: \"3878d9ed-31cf-4a22-9a2e-9866d43fdb8b\") " pod="default/busybox-5b5d89c9d6-c7lzc"
	Mar 18 20:52:07 ha-315064 kubelet[1363]: E0318 20:52:07.737600    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:52:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:52:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:52:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:52:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 20:53:07 ha-315064 kubelet[1363]: E0318 20:53:07.736453    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:53:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:53:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:53:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:53:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 20:54:07 ha-315064 kubelet[1363]: E0318 20:54:07.740706    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:54:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:54:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:54:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:54:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
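The repeating kubelet errors above ("can't initialize ip6tables table `nat' ... do you need to insmod?") typically mean the guest kernel has not loaded the IPv6 NAT module. A manual check, outside the harness (assumes the ha-315064 profile still exists and runs the standard minikube guest image):

	$ out/minikube-linux-amd64 -p ha-315064 ssh "lsmod | grep -E 'ip6table_nat|ip6_tables'"
	$ out/minikube-linux-amd64 -p ha-315064 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"

If modprobe fails, the guest kernel simply lacks IPv6 NAT support; for this IPv4-only cluster the canary error is likely cosmetic (kube-proxy above already reports "No iptables support for family" ipFamily="IPv6").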
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-315064 -n ha-315064
helpers_test.go:261: (dbg) Run:  kubectl --context ha-315064 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.01s)
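To re-run the two post-mortem probes above by hand against the same profile (assuming it still exists on the Jenkins host), the equivalent standalone commands are:

	$ out/minikube-linux-amd64 status --format='{{.APIServer}}' -p ha-315064 -n ha-315064
	$ kubectl --context ha-315064 get po -A --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}'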

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (58.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 3 (3.21441954s)

                                                
                                                
-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:54:57.888733   26272 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:54:57.888844   26272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:54:57.888853   26272 out.go:304] Setting ErrFile to fd 2...
	I0318 20:54:57.888857   26272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:54:57.889052   26272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:54:57.889217   26272 out.go:298] Setting JSON to false
	I0318 20:54:57.889241   26272 mustload.go:65] Loading cluster: ha-315064
	I0318 20:54:57.889293   26272 notify.go:220] Checking for updates...
	I0318 20:54:57.889776   26272 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:54:57.889800   26272 status.go:255] checking status of ha-315064 ...
	I0318 20:54:57.890261   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:57.890316   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:57.909020   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
	I0318 20:54:57.909395   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:57.910038   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:54:57.910056   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:57.910371   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:57.910560   26272 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:54:57.912094   26272 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:54:57.912110   26272 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:54:57.912361   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:57.912390   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:57.926713   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40929
	I0318 20:54:57.927029   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:57.927419   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:54:57.927452   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:57.927745   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:57.927931   26272 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:54:57.930283   26272 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:54:57.930661   26272 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:54:57.930694   26272 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:54:57.930893   26272 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:54:57.931305   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:57.931386   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:57.945058   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0318 20:54:57.945386   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:57.945785   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:54:57.945808   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:57.946112   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:57.946313   26272 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:54:57.946532   26272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:54:57.946557   26272 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:54:57.948988   26272 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:54:57.949391   26272 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:54:57.949413   26272 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:54:57.949569   26272 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:54:57.949734   26272 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:54:57.949931   26272 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:54:57.950081   26272 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:54:58.029075   26272 ssh_runner.go:195] Run: systemctl --version
	I0318 20:54:58.035405   26272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:54:58.052785   26272 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:54:58.052809   26272 api_server.go:166] Checking apiserver status ...
	I0318 20:54:58.052844   26272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:54:58.069627   26272 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:54:58.080748   26272 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:54:58.080789   26272 ssh_runner.go:195] Run: ls
	I0318 20:54:58.086674   26272 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:54:58.095183   26272 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:54:58.095204   26272 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:54:58.095217   26272 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:54:58.095237   26272 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:54:58.095517   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:58.095557   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:58.110144   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I0318 20:54:58.110523   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:58.110996   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:54:58.111019   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:58.111324   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:58.111499   26272 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:54:58.112970   26272 status.go:330] ha-315064-m02 host status = "Running" (err=<nil>)
	I0318 20:54:58.112985   26272 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:54:58.113273   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:58.113314   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:58.126638   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0318 20:54:58.127077   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:58.127588   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:54:58.127616   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:58.127941   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:58.128114   26272 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:54:58.130488   26272 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:54:58.130886   26272 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:54:58.130905   26272 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:54:58.131059   26272 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:54:58.131330   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:54:58.131379   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:54:58.144637   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0318 20:54:58.144979   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:54:58.145365   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:54:58.145386   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:54:58.145683   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:54:58.145856   26272 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:54:58.146033   26272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:54:58.146052   26272 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:54:58.148241   26272 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:54:58.148599   26272 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:54:58.148641   26272 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:54:58.148718   26272 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:54:58.148882   26272 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:54:58.149039   26272 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:54:58.149180   26272 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	W0318 20:55:00.681132   26272 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:55:00.681201   26272 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E0318 20:55:00.681217   26272 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:00.681225   26272 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 20:55:00.681258   26272 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:00.681266   26272 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:55:00.681555   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:00.681593   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:00.696953   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0318 20:55:00.697469   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:00.697962   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:55:00.697996   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:00.698280   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:00.698493   26272 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:00.699834   26272 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:55:00.699850   26272 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:00.700140   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:00.700193   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:00.714184   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I0318 20:55:00.714583   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:00.715044   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:55:00.715057   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:00.715348   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:00.715525   26272 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:55:00.718065   26272 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:00.718453   26272 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:00.718479   26272 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:00.718594   26272 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:00.718909   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:00.718950   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:00.734400   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0318 20:55:00.734794   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:00.735314   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:55:00.735336   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:00.735614   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:00.735784   26272 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:00.735948   26272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:00.735964   26272 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:00.738577   26272 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:00.738946   26272 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:00.738981   26272 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:00.739126   26272 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:00.739286   26272 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:00.739452   26272 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:00.739580   26272 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:00.826654   26272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:00.853049   26272 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:00.853072   26272 api_server.go:166] Checking apiserver status ...
	I0318 20:55:00.853103   26272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:00.869698   26272 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:55:00.881426   26272 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:00.881487   26272 ssh_runner.go:195] Run: ls
	I0318 20:55:00.886343   26272 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:00.893594   26272 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:00.893615   26272 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:55:00.893623   26272 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:00.893637   26272 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:55:00.893905   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:00.893940   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:00.908568   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43743
	I0318 20:55:00.908890   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:00.909309   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:55:00.909332   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:00.909645   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:00.909828   26272 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:00.911267   26272 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:55:00.911284   26272 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:00.911552   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:00.911581   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:00.925696   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0318 20:55:00.926078   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:00.926518   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:55:00.926540   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:00.926875   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:00.927023   26272 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:55:00.929615   26272 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:00.930071   26272 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:00.930100   26272 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:00.930167   26272 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:00.930439   26272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:00.930473   26272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:00.944749   26272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I0318 20:55:00.945236   26272 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:00.945653   26272 main.go:141] libmachine: Using API Version  1
	I0318 20:55:00.945669   26272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:00.945957   26272 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:00.946156   26272 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:00.946322   26272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:00.946351   26272 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:00.948849   26272 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:00.949264   26272 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:00.949283   26272 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:00.949433   26272 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:00.949580   26272 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:00.949710   26272 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:00.949804   26272 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:01.033622   26272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:01.051504   26272 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 3 (5.474931464s)

                                                
                                                
-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:55:01.771498   26367 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:55:01.771696   26367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:01.771710   26367 out.go:304] Setting ErrFile to fd 2...
	I0318 20:55:01.771716   26367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:01.771944   26367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:55:01.772164   26367 out.go:298] Setting JSON to false
	I0318 20:55:01.772191   26367 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:01.772328   26367 notify.go:220] Checking for updates...
	I0318 20:55:01.772688   26367 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:01.772704   26367 status.go:255] checking status of ha-315064 ...
	I0318 20:55:01.773173   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:01.773219   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:01.788358   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44017
	I0318 20:55:01.788798   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:01.789395   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:01.789416   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:01.789808   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:01.790007   26367 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:55:01.791574   26367 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:55:01.791595   26367 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:01.792002   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:01.792054   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:01.807194   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0318 20:55:01.807587   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:01.807950   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:01.807977   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:01.808269   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:01.808427   26367 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:55:01.811097   26367 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:01.811544   26367 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:01.811573   26367 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:01.811730   26367 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:01.812013   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:01.812050   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:01.826709   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34765
	I0318 20:55:01.827121   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:01.827581   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:01.827601   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:01.827892   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:01.828067   26367 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:55:01.828260   26367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:01.828287   26367 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:55:01.830803   26367 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:01.831186   26367 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:01.831211   26367 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:01.831356   26367 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:55:01.831512   26367 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:55:01.831664   26367 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:55:01.831807   26367 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:55:01.913790   26367 ssh_runner.go:195] Run: systemctl --version
	I0318 20:55:01.920733   26367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:01.936257   26367 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:01.936279   26367 api_server.go:166] Checking apiserver status ...
	I0318 20:55:01.936319   26367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:01.951518   26367 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:55:01.961795   26367 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:01.961834   26367 ssh_runner.go:195] Run: ls
	I0318 20:55:01.966749   26367 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:01.973498   26367 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:01.973517   26367 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:55:01.973525   26367 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:01.973541   26367 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:55:01.973808   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:01.973840   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:01.989307   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0318 20:55:01.989731   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:01.990206   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:01.990227   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:01.990592   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:01.990750   26367 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:55:01.992479   26367 status.go:330] ha-315064-m02 host status = "Running" (err=<nil>)
	I0318 20:55:01.992497   26367 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:01.992886   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:01.992947   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:02.007312   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0318 20:55:02.007707   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:02.008164   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:02.008201   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:02.008510   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:02.008692   26367 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:55:02.011388   26367 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:02.011777   26367 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:02.011809   26367 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:02.011941   26367 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:02.012248   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:02.012282   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:02.025963   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43641
	I0318 20:55:02.026378   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:02.026855   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:02.026873   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:02.027216   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:02.027419   26367 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:55:02.027588   26367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:02.027608   26367 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:55:02.030068   26367 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:02.030553   26367 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:02.030578   26367 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:02.030971   26367 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:55:02.031127   26367 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:55:02.031292   26367 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:55:02.031433   26367 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	W0318 20:55:03.753167   26367 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:03.753232   26367 retry.go:31] will retry after 131.138499ms: dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:55:06.825102   26367 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:55:06.825196   26367 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E0318 20:55:06.825224   26367 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:06.825232   26367 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 20:55:06.825251   26367 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:06.825258   26367 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:55:06.827260   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:06.827306   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:06.842350   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0318 20:55:06.842753   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:06.843264   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:06.843288   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:06.843639   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:06.843857   26367 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:06.845476   26367 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:55:06.845494   26367 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:06.845865   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:06.845914   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:06.859947   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0318 20:55:06.860316   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:06.860806   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:06.860828   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:06.861188   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:06.861396   26367 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:55:06.864080   26367 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:06.864496   26367 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:06.864516   26367 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:06.864627   26367 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:06.864942   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:06.864993   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:06.879220   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39705
	I0318 20:55:06.879599   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:06.880037   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:06.880060   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:06.880362   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:06.880582   26367 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:06.880772   26367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:06.880792   26367 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:06.883234   26367 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:06.883702   26367 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:06.883722   26367 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:06.883910   26367 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:06.884062   26367 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:06.884202   26367 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:06.884322   26367 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:06.965407   26367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:06.985300   26367 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:06.985323   26367 api_server.go:166] Checking apiserver status ...
	I0318 20:55:06.985365   26367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:07.006420   26367 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:55:07.020055   26367 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:07.020104   26367 ssh_runner.go:195] Run: ls
	I0318 20:55:07.026221   26367 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:07.032778   26367 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:07.032800   26367 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:55:07.032810   26367 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:07.032824   26367 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:55:07.033147   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:07.033178   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:07.048156   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44431
	I0318 20:55:07.048596   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:07.049211   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:07.049240   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:07.049578   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:07.049783   26367 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:07.051399   26367 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:55:07.051417   26367 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:07.051682   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:07.051712   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:07.065804   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I0318 20:55:07.066154   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:07.066625   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:07.066650   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:07.066997   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:07.067174   26367 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:55:07.070030   26367 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:07.070469   26367 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:07.070491   26367 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:07.070652   26367 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:07.070919   26367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:07.070958   26367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:07.084795   26367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44657
	I0318 20:55:07.085231   26367 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:07.085674   26367 main.go:141] libmachine: Using API Version  1
	I0318 20:55:07.085690   26367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:07.085971   26367 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:07.086162   26367 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:07.086345   26367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:07.086366   26367 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:07.088758   26367 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:07.089216   26367 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:07.089239   26367 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:07.089382   26367 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:07.089537   26367 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:07.089667   26367 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:07.089785   26367 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:07.172991   26367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:07.188989   26367 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 3 (4.391013487s)

-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0318 20:55:09.147845   26463 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:55:09.148097   26463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:09.148108   26463 out.go:304] Setting ErrFile to fd 2...
	I0318 20:55:09.148114   26463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:09.148328   26463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:55:09.148523   26463 out.go:298] Setting JSON to false
	I0318 20:55:09.148554   26463 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:09.148677   26463 notify.go:220] Checking for updates...
	I0318 20:55:09.149068   26463 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:09.149086   26463 status.go:255] checking status of ha-315064 ...
	I0318 20:55:09.149588   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:09.149649   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:09.166271   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46531
	I0318 20:55:09.166670   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:09.167300   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:09.167351   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:09.167684   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:09.167872   26463 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:55:09.169568   26463 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:55:09.169598   26463 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:09.169881   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:09.169940   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:09.184420   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44541
	I0318 20:55:09.184812   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:09.185248   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:09.185274   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:09.185573   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:09.185746   26463 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:55:09.188271   26463 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:09.188706   26463 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:09.188740   26463 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:09.188847   26463 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:09.189152   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:09.189201   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:09.203520   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0318 20:55:09.203950   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:09.204403   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:09.204427   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:09.204696   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:09.204864   26463 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:55:09.205036   26463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:09.205056   26463 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:55:09.207678   26463 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:09.208124   26463 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:09.208161   26463 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:09.208317   26463 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:55:09.208500   26463 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:55:09.208654   26463 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:55:09.208824   26463 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:55:09.295605   26463 ssh_runner.go:195] Run: systemctl --version
	I0318 20:55:09.303587   26463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:09.320846   26463 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:09.320876   26463 api_server.go:166] Checking apiserver status ...
	I0318 20:55:09.320930   26463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:09.338854   26463 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:55:09.349897   26463 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:09.349950   26463 ssh_runner.go:195] Run: ls
	I0318 20:55:09.355697   26463 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:09.360393   26463 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:09.360417   26463 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:55:09.360425   26463 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:09.360445   26463 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:55:09.360760   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:09.360821   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:09.375451   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0318 20:55:09.375998   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:09.376487   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:09.376510   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:09.376805   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:09.376993   26463 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:55:09.378638   26463 status.go:330] ha-315064-m02 host status = "Running" (err=<nil>)
	I0318 20:55:09.378652   26463 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:09.378918   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:09.378949   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:09.393198   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38891
	I0318 20:55:09.393559   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:09.394040   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:09.394061   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:09.394407   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:09.394627   26463 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:55:09.397289   26463 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:09.397612   26463 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:09.397650   26463 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:09.397770   26463 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:09.398053   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:09.398084   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:09.413355   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0318 20:55:09.413790   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:09.414237   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:09.414263   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:09.414588   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:09.414810   26463 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:55:09.415003   26463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:09.415022   26463 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:55:09.418169   26463 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:09.418690   26463 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:09.418772   26463 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:09.419029   26463 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:55:09.419216   26463 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:55:09.419399   26463 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:55:09.419577   26463 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	W0318 20:55:09.893081   26463 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:09.893131   26463 retry.go:31] will retry after 164.515332ms: dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:55:13.125157   26463 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:55:13.125259   26463 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E0318 20:55:13.125283   26463 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:13.125294   26463 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 20:55:13.125326   26463 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:13.125339   26463 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:55:13.126123   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:13.126167   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:13.142186   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I0318 20:55:13.142608   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:13.143100   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:13.143127   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:13.143475   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:13.143675   26463 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:13.145390   26463 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:55:13.145409   26463 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:13.145679   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:13.145713   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:13.160264   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0318 20:55:13.160654   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:13.161134   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:13.161150   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:13.161520   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:13.161709   26463 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:55:13.164947   26463 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:13.165430   26463 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:13.165466   26463 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:13.165709   26463 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:13.166026   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:13.166070   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:13.180454   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I0318 20:55:13.180868   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:13.181324   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:13.181347   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:13.181617   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:13.181817   26463 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:13.181987   26463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:13.182007   26463 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:13.184177   26463 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:13.184622   26463 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:13.184645   26463 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:13.184795   26463 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:13.184972   26463 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:13.185172   26463 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:13.185311   26463 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:13.269126   26463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:13.284617   26463 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:13.284644   26463 api_server.go:166] Checking apiserver status ...
	I0318 20:55:13.284690   26463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:13.299046   26463 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:55:13.308804   26463 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:13.308856   26463 ssh_runner.go:195] Run: ls
	I0318 20:55:13.313586   26463 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:13.320736   26463 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:13.320758   26463 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:55:13.320770   26463 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:13.320790   26463 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:55:13.321113   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:13.321155   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:13.335760   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I0318 20:55:13.336263   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:13.336735   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:13.336756   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:13.337075   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:13.337279   26463 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:13.338620   26463 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:55:13.338636   26463 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:13.338916   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:13.338949   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:13.353442   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0318 20:55:13.353807   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:13.354234   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:13.354259   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:13.354590   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:13.354793   26463 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:55:13.357540   26463 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:13.357951   26463 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:13.357979   26463 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:13.358113   26463 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:13.358384   26463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:13.358416   26463 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:13.373929   26463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41611
	I0318 20:55:13.374342   26463 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:13.374881   26463 main.go:141] libmachine: Using API Version  1
	I0318 20:55:13.374909   26463 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:13.375294   26463 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:13.375494   26463 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:13.375710   26463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:13.375733   26463 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:13.378357   26463 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:13.378796   26463 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:13.378834   26463 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:13.378956   26463 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:13.379109   26463 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:13.379231   26463 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:13.379402   26463 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:13.466408   26463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:13.483629   26463 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0318 20:55:14.158286   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 3 (3.762242077s)

-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0318 20:55:16.430671   26569 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:55:16.430786   26569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:16.430793   26569 out.go:304] Setting ErrFile to fd 2...
	I0318 20:55:16.430799   26569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:16.431066   26569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:55:16.431250   26569 out.go:298] Setting JSON to false
	I0318 20:55:16.431276   26569 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:16.431391   26569 notify.go:220] Checking for updates...
	I0318 20:55:16.431620   26569 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:16.431632   26569 status.go:255] checking status of ha-315064 ...
	I0318 20:55:16.432013   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:16.432126   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:16.448596   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46749
	I0318 20:55:16.451627   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:16.452238   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:16.452271   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:16.452690   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:16.452887   26569 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:55:16.454569   26569 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:55:16.454598   26569 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:16.454998   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:16.455043   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:16.469797   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34501
	I0318 20:55:16.470268   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:16.470733   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:16.470754   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:16.471096   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:16.471277   26569 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:55:16.474168   26569 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:16.474584   26569 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:16.474616   26569 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:16.474757   26569 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:16.475150   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:16.475193   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:16.489717   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0318 20:55:16.490114   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:16.490579   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:16.490601   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:16.490892   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:16.491080   26569 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:55:16.491294   26569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:16.491326   26569 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:55:16.493730   26569 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:16.494113   26569 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:16.494158   26569 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:16.494331   26569 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:55:16.494504   26569 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:55:16.494670   26569 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:55:16.494805   26569 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:55:16.577621   26569 ssh_runner.go:195] Run: systemctl --version
	I0318 20:55:16.587802   26569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:16.608517   26569 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:16.608541   26569 api_server.go:166] Checking apiserver status ...
	I0318 20:55:16.608575   26569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:16.627141   26569 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:55:16.645661   26569 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:16.645723   26569 ssh_runner.go:195] Run: ls
	I0318 20:55:16.651910   26569 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:16.657557   26569 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:16.657576   26569 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:55:16.657585   26569 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:16.657599   26569 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:55:16.657905   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:16.657940   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:16.672223   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0318 20:55:16.672645   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:16.673096   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:16.673116   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:16.673542   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:16.673748   26569 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:55:16.675120   26569 status.go:330] ha-315064-m02 host status = "Running" (err=<nil>)
	I0318 20:55:16.675136   26569 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:16.675588   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:16.675633   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:16.691177   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44473
	I0318 20:55:16.691534   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:16.691992   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:16.692017   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:16.692315   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:16.692494   26569 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:55:16.695478   26569 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:16.695882   26569 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:16.695903   26569 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:16.696037   26569 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:16.696445   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:16.696490   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:16.711165   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I0318 20:55:16.711525   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:16.712032   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:16.712053   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:16.712426   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:16.712621   26569 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:55:16.712820   26569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:16.712844   26569 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:55:16.715370   26569 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:16.715724   26569 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:16.715759   26569 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:16.715825   26569 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:55:16.716002   26569 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:55:16.716141   26569 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:55:16.716239   26569 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	W0318 20:55:19.781137   26569 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:55:19.781213   26569 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E0318 20:55:19.781230   26569 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:19.781243   26569 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 20:55:19.781282   26569 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:19.781289   26569 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:55:19.781604   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:19.781656   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:19.796382   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0318 20:55:19.796799   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:19.797272   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:19.797294   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:19.797641   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:19.797843   26569 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:19.799384   26569 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:55:19.799405   26569 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:19.799719   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:19.799753   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:19.814052   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I0318 20:55:19.814406   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:19.814865   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:19.814888   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:19.815159   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:19.815353   26569 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:55:19.817931   26569 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:19.818271   26569 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:19.818296   26569 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:19.818405   26569 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:19.818738   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:19.818778   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:19.832567   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42161
	I0318 20:55:19.832958   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:19.833393   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:19.833413   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:19.833697   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:19.833866   26569 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:19.834066   26569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:19.834086   26569 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:19.836731   26569 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:19.837121   26569 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:19.837146   26569 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:19.837259   26569 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:19.837430   26569 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:19.837555   26569 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:19.837693   26569 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:19.924970   26569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:19.940381   26569 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:19.940402   26569 api_server.go:166] Checking apiserver status ...
	I0318 20:55:19.940441   26569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:19.955582   26569 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:55:19.965456   26569 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:19.965563   26569 ssh_runner.go:195] Run: ls
	I0318 20:55:19.970367   26569 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:19.977750   26569 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:19.977771   26569 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:55:19.977783   26569 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:19.977802   26569 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:55:19.978101   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:19.978156   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:19.992507   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I0318 20:55:19.992987   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:19.993454   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:19.993491   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:19.993834   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:19.994053   26569 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:19.995764   26569 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:55:19.995782   26569 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:19.996046   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:19.996088   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:20.012232   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33157
	I0318 20:55:20.012657   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:20.013178   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:20.013201   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:20.013505   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:20.013690   26569 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:55:20.016449   26569 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:20.016940   26569 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:20.016971   26569 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:20.017125   26569 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:20.017541   26569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:20.017586   26569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:20.031721   26569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0318 20:55:20.032148   26569 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:20.032563   26569 main.go:141] libmachine: Using API Version  1
	I0318 20:55:20.032584   26569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:20.032868   26569 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:20.033057   26569 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:20.033279   26569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:20.033303   26569 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:20.035976   26569 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:20.036393   26569 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:20.036416   26569 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:20.036547   26569 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:20.036688   26569 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:20.036844   26569 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:20.036995   26569 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:20.120960   26569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:20.138035   26569 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0318 20:55:23.236687   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 3 (3.750295985s)

-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0318 20:55:23.851852   26664 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:55:23.852095   26664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:23.852104   26664 out.go:304] Setting ErrFile to fd 2...
	I0318 20:55:23.852108   26664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:23.852298   26664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:55:23.852468   26664 out.go:298] Setting JSON to false
	I0318 20:55:23.852491   26664 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:23.852612   26664 notify.go:220] Checking for updates...
	I0318 20:55:23.852838   26664 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:23.852852   26664 status.go:255] checking status of ha-315064 ...
	I0318 20:55:23.853241   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:23.853294   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:23.868290   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0318 20:55:23.868781   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:23.869350   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:23.869371   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:23.869714   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:23.869934   26664 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:55:23.871514   26664 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:55:23.871532   26664 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:23.871836   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:23.871882   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:23.886053   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I0318 20:55:23.886408   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:23.886848   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:23.886874   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:23.887188   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:23.887363   26664 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:55:23.889900   26664 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:23.890335   26664 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:23.890372   26664 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:23.890509   26664 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:23.890775   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:23.890807   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:23.906644   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0318 20:55:23.907009   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:23.907448   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:23.907482   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:23.907794   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:23.907959   26664 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:55:23.908138   26664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:23.908169   26664 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:55:23.910626   26664 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:23.911006   26664 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:23.911037   26664 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:23.911178   26664 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:55:23.911352   26664 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:55:23.911502   26664 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:55:23.911642   26664 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:55:23.994501   26664 ssh_runner.go:195] Run: systemctl --version
	I0318 20:55:24.002541   26664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:24.019915   26664 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:24.019942   26664 api_server.go:166] Checking apiserver status ...
	I0318 20:55:24.019976   26664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:24.037411   26664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:55:24.049063   26664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:24.049137   26664 ssh_runner.go:195] Run: ls
	I0318 20:55:24.054619   26664 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:24.061386   26664 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:24.061410   26664 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:55:24.061419   26664 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:24.061434   26664 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:55:24.061726   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:24.061764   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:24.076255   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37979
	I0318 20:55:24.076701   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:24.077247   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:24.077269   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:24.077553   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:24.077759   26664 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:55:24.079256   26664 status.go:330] ha-315064-m02 host status = "Running" (err=<nil>)
	I0318 20:55:24.079281   26664 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:24.079594   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:24.079633   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:24.093891   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0318 20:55:24.094257   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:24.094683   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:24.094702   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:24.095078   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:24.095251   26664 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:55:24.097854   26664 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:24.098290   26664 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:24.098313   26664 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:24.098466   26664 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:24.098738   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:24.098770   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:24.113396   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0318 20:55:24.113788   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:24.114209   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:24.114235   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:24.114553   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:24.114743   26664 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:55:24.114928   26664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:24.114948   26664 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:55:24.117371   26664 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:24.117753   26664 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:24.117783   26664 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:24.117889   26664 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:55:24.118056   26664 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:55:24.118207   26664 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:55:24.118337   26664 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	W0318 20:55:27.177129   26664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:55:27.177246   26664 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E0318 20:55:27.177270   26664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:27.177283   26664 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 20:55:27.177301   26664 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:27.177311   26664 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:55:27.177696   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:27.177754   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:27.193085   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0318 20:55:27.193491   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:27.194002   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:27.194030   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:27.194378   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:27.194584   26664 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:27.196150   26664 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:55:27.196167   26664 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:27.196463   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:27.196495   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:27.210825   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39021
	I0318 20:55:27.211189   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:27.211648   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:27.211676   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:27.211971   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:27.212139   26664 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:55:27.214819   26664 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:27.215252   26664 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:27.215278   26664 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:27.215391   26664 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:27.215653   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:27.215688   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:27.231022   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0318 20:55:27.231389   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:27.231842   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:27.231866   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:27.232207   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:27.232395   26664 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:27.232592   26664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:27.232619   26664 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:27.235349   26664 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:27.235752   26664 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:27.235781   26664 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:27.235877   26664 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:27.236033   26664 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:27.236181   26664 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:27.236295   26664 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:27.321870   26664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:27.342908   26664 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:27.342931   26664 api_server.go:166] Checking apiserver status ...
	I0318 20:55:27.342958   26664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:27.359992   26664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:55:27.376415   26664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:27.376467   26664 ssh_runner.go:195] Run: ls
	I0318 20:55:27.381519   26664 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:27.388276   26664 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:27.388298   26664 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:55:27.388306   26664 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:27.388320   26664 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:55:27.388643   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:27.388677   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:27.403636   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I0318 20:55:27.404019   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:27.404466   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:27.404492   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:27.404764   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:27.404961   26664 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:27.406670   26664 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:55:27.406688   26664 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:27.407061   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:27.407106   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:27.421138   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0318 20:55:27.421482   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:27.421967   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:27.421988   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:27.422272   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:27.422448   26664 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:55:27.425015   26664 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:27.425416   26664 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:27.425434   26664 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:27.425567   26664 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:27.425881   26664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:27.425940   26664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:27.439867   26664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45169
	I0318 20:55:27.440189   26664 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:27.440582   26664 main.go:141] libmachine: Using API Version  1
	I0318 20:55:27.440600   26664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:27.440893   26664 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:27.441097   26664 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:27.441272   26664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:27.441291   26664 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:27.443742   26664 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:27.444135   26664 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:27.444160   26664 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:27.444275   26664 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:27.444452   26664 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:27.444575   26664 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:27.444686   26664 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:27.529245   26664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:27.544877   26664 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 3 (3.760563902s)

                                                
                                                
-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:55:30.851953   26771 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:55:30.852451   26771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:30.852469   26771 out.go:304] Setting ErrFile to fd 2...
	I0318 20:55:30.852476   26771 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:30.852969   26771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:55:30.853303   26771 out.go:298] Setting JSON to false
	I0318 20:55:30.853358   26771 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:30.853477   26771 notify.go:220] Checking for updates...
	I0318 20:55:30.854176   26771 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:30.854199   26771 status.go:255] checking status of ha-315064 ...
	I0318 20:55:30.854644   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:30.854715   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:30.869545   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45429
	I0318 20:55:30.869957   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:30.870601   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:30.870635   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:30.870980   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:30.871205   26771 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:55:30.872915   26771 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:55:30.872933   26771 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:30.873292   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:30.873337   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:30.888434   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0318 20:55:30.888887   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:30.889334   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:30.889355   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:30.889693   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:30.889863   26771 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:55:30.892564   26771 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:30.893068   26771 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:30.893091   26771 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:30.893188   26771 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:30.893480   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:30.893524   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:30.907708   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0318 20:55:30.908104   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:30.908505   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:30.908523   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:30.908825   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:30.909023   26771 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:55:30.909232   26771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:30.909265   26771 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:55:30.911765   26771 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:30.912165   26771 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:30.912219   26771 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:30.912386   26771 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:55:30.912549   26771 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:55:30.912684   26771 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:55:30.912782   26771 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:55:30.993769   26771 ssh_runner.go:195] Run: systemctl --version
	I0318 20:55:31.000635   26771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:31.020885   26771 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:31.020935   26771 api_server.go:166] Checking apiserver status ...
	I0318 20:55:31.020974   26771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:31.037907   26771 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:55:31.049411   26771 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:31.049458   26771 ssh_runner.go:195] Run: ls
	I0318 20:55:31.054596   26771 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:31.061174   26771 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:31.061194   26771 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:55:31.061206   26771 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:31.061225   26771 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:55:31.061537   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:31.061577   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:31.075859   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43605
	I0318 20:55:31.076282   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:31.076769   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:31.076794   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:31.077123   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:31.077311   26771 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:55:31.078856   26771 status.go:330] ha-315064-m02 host status = "Running" (err=<nil>)
	I0318 20:55:31.078872   26771 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:31.079168   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:31.079207   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:31.093519   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34573
	I0318 20:55:31.093832   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:31.094229   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:31.094249   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:31.094532   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:31.094757   26771 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:55:31.097289   26771 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:31.097664   26771 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:31.097699   26771 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:31.097818   26771 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 20:55:31.098090   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:31.098120   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:31.112146   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35471
	I0318 20:55:31.112528   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:31.113048   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:31.113065   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:31.113351   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:31.113535   26771 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:55:31.113755   26771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:31.113776   26771 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:55:31.116351   26771 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:31.116753   26771 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:55:31.116783   26771 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:55:31.116924   26771 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:55:31.117069   26771 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:55:31.117222   26771 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:55:31.117370   26771 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	W0318 20:55:34.181113   26771 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.231:22: connect: no route to host
	W0318 20:55:34.181200   26771 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E0318 20:55:34.181217   26771 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:34.181226   26771 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 20:55:34.181253   26771 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	I0318 20:55:34.181260   26771 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:55:34.181714   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:34.181778   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:34.196497   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I0318 20:55:34.196997   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:34.197457   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:34.197478   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:34.197785   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:34.198029   26771 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:34.199793   26771 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:55:34.199809   26771 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:34.200197   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:34.200240   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:34.214561   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I0318 20:55:34.214931   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:34.215374   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:34.215397   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:34.215682   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:34.215880   26771 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:55:34.218553   26771 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:34.218934   26771 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:34.218961   26771 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:34.219058   26771 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:34.219447   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:34.219492   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:34.234144   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0318 20:55:34.234503   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:34.234944   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:34.234963   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:34.235279   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:34.235472   26771 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:34.235644   26771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:34.235664   26771 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:34.238385   26771 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:34.238848   26771 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:34.238879   26771 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:34.239009   26771 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:34.239153   26771 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:34.239304   26771 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:34.239441   26771 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:34.325736   26771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:34.344259   26771 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:34.344298   26771 api_server.go:166] Checking apiserver status ...
	I0318 20:55:34.344334   26771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:34.361442   26771 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:55:34.379512   26771 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:34.379562   26771 ssh_runner.go:195] Run: ls
	I0318 20:55:34.385098   26771 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:34.392134   26771 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:34.392159   26771 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:55:34.392170   26771 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:34.392189   26771 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:55:34.392538   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:34.392572   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:34.409697   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I0318 20:55:34.410194   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:34.410642   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:34.410662   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:34.410969   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:34.411156   26771 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:34.412869   26771 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:55:34.412890   26771 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:34.413183   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:34.413217   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:34.428650   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0318 20:55:34.429111   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:34.429650   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:34.429675   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:34.430029   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:34.430255   26771 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:55:34.433062   26771 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:34.433465   26771 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:34.433506   26771 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:34.433660   26771 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:34.434008   26771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:34.434054   26771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:34.449476   26771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0318 20:55:34.449831   26771 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:34.450265   26771 main.go:141] libmachine: Using API Version  1
	I0318 20:55:34.450286   26771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:34.450581   26771 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:34.450763   26771 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:34.450939   26771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:34.450960   26771 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:34.453876   26771 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:34.454349   26771 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:34.454382   26771 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:34.454524   26771 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:34.454667   26771 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:34.454821   26771 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:34.454949   26771 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:34.537762   26771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:34.554603   26771 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 7 (622.749488ms)

                                                
                                                
-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:55:39.222192   26889 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:55:39.222295   26889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:39.222304   26889 out.go:304] Setting ErrFile to fd 2...
	I0318 20:55:39.222307   26889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:39.222482   26889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:55:39.222624   26889 out.go:298] Setting JSON to false
	I0318 20:55:39.222648   26889 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:39.222780   26889 notify.go:220] Checking for updates...
	I0318 20:55:39.223000   26889 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:39.223015   26889 status.go:255] checking status of ha-315064 ...
	I0318 20:55:39.223423   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.223482   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.240350   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0318 20:55:39.240797   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.241346   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.241366   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.241728   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.241897   26889 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:55:39.243441   26889 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:55:39.243458   26889 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:39.243843   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.243886   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.258873   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37315
	I0318 20:55:39.259220   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.259714   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.259764   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.260124   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.260297   26889 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:55:39.262999   26889 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:39.263456   26889 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:39.263489   26889 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:39.263561   26889 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:39.263835   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.263876   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.277496   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0318 20:55:39.277859   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.278253   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.278275   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.278544   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.278735   26889 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:55:39.278913   26889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:39.278940   26889 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:55:39.281396   26889 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:39.281742   26889 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:39.281760   26889 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:39.281976   26889 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:55:39.282129   26889 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:55:39.282243   26889 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:55:39.282344   26889 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:55:39.362026   26889 ssh_runner.go:195] Run: systemctl --version
	I0318 20:55:39.369579   26889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:39.385992   26889 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:39.386019   26889 api_server.go:166] Checking apiserver status ...
	I0318 20:55:39.386064   26889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:39.404396   26889 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:55:39.417162   26889 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:39.417206   26889 ssh_runner.go:195] Run: ls
	I0318 20:55:39.422214   26889 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:39.428483   26889 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:39.428506   26889 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:55:39.428519   26889 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:39.428541   26889 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:55:39.428920   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.428967   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.443368   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0318 20:55:39.443784   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.444232   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.444256   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.444587   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.444798   26889 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:55:39.446278   26889 status.go:330] ha-315064-m02 host status = "Stopped" (err=<nil>)
	I0318 20:55:39.446293   26889 status.go:343] host is not running, skipping remaining checks
	I0318 20:55:39.446301   26889 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:39.446321   26889 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:55:39.446622   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.446660   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.460602   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39711
	I0318 20:55:39.460994   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.461458   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.461490   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.461755   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.461944   26889 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:39.463343   26889 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:55:39.463359   26889 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:39.463703   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.463740   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.477495   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0318 20:55:39.477896   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.478296   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.478322   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.478664   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.478853   26889 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:55:39.481399   26889 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:39.481735   26889 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:39.481762   26889 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:39.481867   26889 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:39.482165   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.482195   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.495950   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0318 20:55:39.496365   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.496752   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.496773   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.497119   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.497300   26889 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:39.497470   26889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:39.497491   26889 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:39.499893   26889 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:39.500241   26889 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:39.500260   26889 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:39.500390   26889 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:39.500536   26889 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:39.500680   26889 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:39.500816   26889 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:39.581036   26889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:39.595706   26889 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:39.595731   26889 api_server.go:166] Checking apiserver status ...
	I0318 20:55:39.595773   26889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:39.609787   26889 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:55:39.620161   26889 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:39.620216   26889 ssh_runner.go:195] Run: ls
	I0318 20:55:39.625012   26889 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:39.629899   26889 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:39.629920   26889 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:55:39.629930   26889 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:39.629947   26889 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:55:39.630233   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.630271   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.645836   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0318 20:55:39.646185   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.646631   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.646662   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.646964   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.647140   26889 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:39.648555   26889 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:55:39.648568   26889 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:39.648831   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.648861   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.662746   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0318 20:55:39.663172   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.663693   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.663713   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.663989   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.664174   26889 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:55:39.666925   26889 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:39.667355   26889 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:39.667378   26889 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:39.667551   26889 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:39.667910   26889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:39.667948   26889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:39.682312   26889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0318 20:55:39.682705   26889 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:39.683207   26889 main.go:141] libmachine: Using API Version  1
	I0318 20:55:39.683230   26889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:39.683558   26889 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:39.683773   26889 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:39.684000   26889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:39.684025   26889 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:39.686867   26889 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:39.687294   26889 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:39.687323   26889 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:39.687510   26889 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:39.687692   26889 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:39.687880   26889 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:39.688024   26889 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:39.773441   26889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:39.790016   26889 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
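
The stderr trace above shows the per-node checks behind `minikube status`: the kvm2 driver plugin is dialed over a local RPC port, the guest is reached over SSH to run `df -h /var` and `systemctl is-active kubelet`, and for control-plane nodes the shared apiserver endpoint at https://192.168.39.254:8443/healthz is probed (the failed freezer-cgroup lookup is tolerated and the healthz probe is used instead). Below is a minimal standalone sketch of that last probe only; the VIP, port, and the "ok" body are taken from the log above, and skipping TLS verification is an assumption made so the sketch runs without the cluster's CA bundle (minikube's real api_server.go code path differs).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the HA control-plane VIP seen in the log above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", matching the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}

Against this cluster the probe should print "healthz returned 200: ok", matching the api_server.go lines above.
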
E0318 20:55:50.920651   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 7 (657.015319ms)

                                                
                                                
-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-315064-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:55:53.440490   26993 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:55:53.440740   26993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:53.440751   26993 out.go:304] Setting ErrFile to fd 2...
	I0318 20:55:53.440756   26993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:53.440954   26993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:55:53.441130   26993 out.go:298] Setting JSON to false
	I0318 20:55:53.441156   26993 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:53.441202   26993 notify.go:220] Checking for updates...
	I0318 20:55:53.441686   26993 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:53.441709   26993 status.go:255] checking status of ha-315064 ...
	I0318 20:55:53.442130   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.442190   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.459131   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38455
	I0318 20:55:53.459526   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.460048   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.460070   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.460492   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.460716   26993 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:55:53.462194   26993 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 20:55:53.462209   26993 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:53.462458   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.462499   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.477131   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
	I0318 20:55:53.477493   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.477980   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.477999   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.478387   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.478586   26993 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:55:53.481420   26993 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:53.481800   26993 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:53.481836   26993 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:53.481926   26993 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:55:53.482215   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.482261   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.496209   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0318 20:55:53.496580   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.497039   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.497061   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.497339   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.497527   26993 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:55:53.497696   26993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:53.497716   26993 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:55:53.500315   26993 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:53.500686   26993 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:55:53.500719   26993 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:55:53.500834   26993 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:55:53.501020   26993 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:55:53.501171   26993 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:55:53.501300   26993 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:55:53.590985   26993 ssh_runner.go:195] Run: systemctl --version
	I0318 20:55:53.599379   26993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:53.628429   26993 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:53.628455   26993 api_server.go:166] Checking apiserver status ...
	I0318 20:55:53.628493   26993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:53.645979   26993 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W0318 20:55:53.657148   26993 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:53.657203   26993 ssh_runner.go:195] Run: ls
	I0318 20:55:53.662692   26993 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:53.667689   26993 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:53.667708   26993 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 20:55:53.667715   26993 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:53.667733   26993 status.go:255] checking status of ha-315064-m02 ...
	I0318 20:55:53.668015   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.668048   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.683183   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35405
	I0318 20:55:53.683620   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.684082   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.684099   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.684386   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.684584   26993 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:55:53.686074   26993 status.go:330] ha-315064-m02 host status = "Stopped" (err=<nil>)
	I0318 20:55:53.686087   26993 status.go:343] host is not running, skipping remaining checks
	I0318 20:55:53.686093   26993 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:53.686114   26993 status.go:255] checking status of ha-315064-m03 ...
	I0318 20:55:53.686941   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.687033   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.702080   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0318 20:55:53.702558   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.703017   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.703040   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.703383   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.703577   26993 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:53.705071   26993 status.go:330] ha-315064-m03 host status = "Running" (err=<nil>)
	I0318 20:55:53.705088   26993 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:53.705450   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.705489   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.719377   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0318 20:55:53.719701   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.720095   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.720115   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.720440   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.720601   26993 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:55:53.723167   26993 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:53.723593   26993 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:53.723630   26993 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:53.723757   26993 host.go:66] Checking if "ha-315064-m03" exists ...
	I0318 20:55:53.724029   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.724061   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.738492   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0318 20:55:53.738845   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.739227   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.739249   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.739538   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.739680   26993 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:53.739857   26993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:53.739884   26993 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:53.742636   26993 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:53.743040   26993 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:53.743061   26993 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:53.743202   26993 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:53.743339   26993 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:53.743463   26993 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:53.743571   26993 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:53.827787   26993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:53.844820   26993 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 20:55:53.844844   26993 api_server.go:166] Checking apiserver status ...
	I0318 20:55:53.844883   26993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:55:53.862028   26993 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup
	W0318 20:55:53.873537   26993 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1503/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 20:55:53.873582   26993 ssh_runner.go:195] Run: ls
	I0318 20:55:53.878925   26993 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 20:55:53.883636   26993 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 20:55:53.883655   26993 status.go:422] ha-315064-m03 apiserver status = Running (err=<nil>)
	I0318 20:55:53.883665   26993 status.go:257] ha-315064-m03 status: &{Name:ha-315064-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 20:55:53.883685   26993 status.go:255] checking status of ha-315064-m04 ...
	I0318 20:55:53.883955   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.883995   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.898115   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46677
	I0318 20:55:53.898593   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.899078   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.899097   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.899385   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.899558   26993 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:53.900956   26993 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 20:55:53.900972   26993 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:53.901253   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.901291   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.916517   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35047
	I0318 20:55:53.916870   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.917282   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.917305   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.917651   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.917865   26993 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 20:55:53.920410   26993 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:53.920780   26993 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:53.920806   26993 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:53.920961   26993 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 20:55:53.921218   26993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:53.921250   26993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:53.936140   26993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0318 20:55:53.936573   26993 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:53.937056   26993 main.go:141] libmachine: Using API Version  1
	I0318 20:55:53.937075   26993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:53.937367   26993 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:53.937538   26993 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:53.937738   26993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 20:55:53.937761   26993 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:53.940337   26993 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:53.940786   26993 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:53.940814   26993 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:53.940980   26993 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:53.941149   26993 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:53.941292   26993 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:53.941402   26993 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:54.025539   26993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:55:54.042900   26993 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr" : exit status 7
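
The exit status 7 above is consistent with the status command OR-ing one exit-code bit per failing component across nodes (host, kubelet, apiserver), which matches ha-315064-m02 reporting all three as Stopped. The flag values in the sketch below are an assumption based on minikube's cmd/minikube/cmd/status.go rather than anything shown in this log, so treat it as illustrative only.

package main

import "fmt"

// Assumed bit layout; verify against cmd/minikube/cmd/status.go before relying on it.
const (
	hostNotRunning      = 1 << 0
	kubeletNotRunning   = 1 << 1
	apiserverNotRunning = 1 << 2
)

func main() {
	exitCode := 7 // reported by the failed "minikube status" run above
	if exitCode&hostNotRunning != 0 {
		fmt.Println("some node's host is not running")
	}
	if exitCode&kubeletNotRunning != 0 {
		fmt.Println("some node's kubelet is not running")
	}
	if exitCode&apiserverNotRunning != 0 {
		fmt.Println("some node's apiserver is not running")
	}
}

For machine-readable per-node output, `minikube status -p ha-315064 --output json` should print the same Name/Host/Kubelet/APIServer/Kubeconfig fields that appear in the status structs above.
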
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-315064 -n ha-315064
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-315064 logs -n 25: (1.525144306s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064:/home/docker/cp-test_ha-315064-m03_ha-315064.txt                      |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064 sudo cat                                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064.txt                                |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m02:/home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m02 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04:/home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m04 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp testdata/cp-test.txt                                               | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064:/home/docker/cp-test_ha-315064-m04_ha-315064.txt                      |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064 sudo cat                                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064.txt                                |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m02:/home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m02 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03:/home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m03 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-315064 node stop m02 -v=7                                                    | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-315064 node start m02 -v=7                                                   | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:54 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 20:46:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 20:46:21.885782   21691 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:46:21.885913   21691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:46:21.885922   21691 out.go:304] Setting ErrFile to fd 2...
	I0318 20:46:21.885925   21691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:46:21.886118   21691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:46:21.886685   21691 out.go:298] Setting JSON to false
	I0318 20:46:21.887530   21691 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1726,"bootTime":1710793056,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:46:21.887590   21691 start.go:139] virtualization: kvm guest
	I0318 20:46:21.889402   21691 out.go:177] * [ha-315064] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:46:21.890735   21691 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 20:46:21.891888   21691 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:46:21.890792   21691 notify.go:220] Checking for updates...
	I0318 20:46:21.894112   21691 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:46:21.895264   21691 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:46:21.896403   21691 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 20:46:21.897538   21691 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 20:46:21.898928   21691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:46:21.931371   21691 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 20:46:21.932613   21691 start.go:297] selected driver: kvm2
	I0318 20:46:21.932627   21691 start.go:901] validating driver "kvm2" against <nil>
	I0318 20:46:21.932639   21691 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 20:46:21.933394   21691 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:46:21.933464   21691 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 20:46:21.947602   21691 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 20:46:21.947657   21691 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 20:46:21.947851   21691 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:46:21.947906   21691 cni.go:84] Creating CNI manager for ""
	I0318 20:46:21.947917   21691 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 20:46:21.947922   21691 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 20:46:21.947978   21691 start.go:340] cluster config:
	{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:46:21.948058   21691 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:46:21.949771   21691 out.go:177] * Starting "ha-315064" primary control-plane node in "ha-315064" cluster
	I0318 20:46:21.950997   21691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:46:21.951024   21691 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 20:46:21.951030   21691 cache.go:56] Caching tarball of preloaded images
	I0318 20:46:21.951097   21691 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:46:21.951108   21691 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:46:21.951385   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:46:21.951403   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json: {Name:mk3e2c3521eb14f618d4105d084216970f5e6904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:21.951519   21691 start.go:360] acquireMachinesLock for ha-315064: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:46:21.951546   21691 start.go:364] duration metric: took 13.923µs to acquireMachinesLock for "ha-315064"
	I0318 20:46:21.951561   21691 start.go:93] Provisioning new machine with config: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:46:21.951619   21691 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 20:46:21.953226   21691 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 20:46:21.953371   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:46:21.953402   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:46:21.966804   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0318 20:46:21.967173   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:46:21.967702   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:46:21.967731   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:46:21.968037   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:46:21.968206   21691 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:46:21.968333   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:21.968476   21691 start.go:159] libmachine.API.Create for "ha-315064" (driver="kvm2")
	I0318 20:46:21.968502   21691 client.go:168] LocalClient.Create starting
	I0318 20:46:21.968533   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 20:46:21.968570   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:46:21.968594   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:46:21.968663   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 20:46:21.968687   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:46:21.968713   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:46:21.968761   21691 main.go:141] libmachine: Running pre-create checks...
	I0318 20:46:21.968775   21691 main.go:141] libmachine: (ha-315064) Calling .PreCreateCheck
	I0318 20:46:21.969084   21691 main.go:141] libmachine: (ha-315064) Calling .GetConfigRaw
	I0318 20:46:21.969413   21691 main.go:141] libmachine: Creating machine...
	I0318 20:46:21.969426   21691 main.go:141] libmachine: (ha-315064) Calling .Create
	I0318 20:46:21.969543   21691 main.go:141] libmachine: (ha-315064) Creating KVM machine...
	I0318 20:46:21.970696   21691 main.go:141] libmachine: (ha-315064) DBG | found existing default KVM network
	I0318 20:46:21.971295   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:21.971171   21714 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012b980}
	I0318 20:46:21.971315   21691 main.go:141] libmachine: (ha-315064) DBG | created network xml: 
	I0318 20:46:21.971327   21691 main.go:141] libmachine: (ha-315064) DBG | <network>
	I0318 20:46:21.971343   21691 main.go:141] libmachine: (ha-315064) DBG |   <name>mk-ha-315064</name>
	I0318 20:46:21.971376   21691 main.go:141] libmachine: (ha-315064) DBG |   <dns enable='no'/>
	I0318 20:46:21.971398   21691 main.go:141] libmachine: (ha-315064) DBG |   
	I0318 20:46:21.971414   21691 main.go:141] libmachine: (ha-315064) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 20:46:21.971426   21691 main.go:141] libmachine: (ha-315064) DBG |     <dhcp>
	I0318 20:46:21.971440   21691 main.go:141] libmachine: (ha-315064) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 20:46:21.971451   21691 main.go:141] libmachine: (ha-315064) DBG |     </dhcp>
	I0318 20:46:21.971461   21691 main.go:141] libmachine: (ha-315064) DBG |   </ip>
	I0318 20:46:21.971477   21691 main.go:141] libmachine: (ha-315064) DBG |   
	I0318 20:46:21.971489   21691 main.go:141] libmachine: (ha-315064) DBG | </network>
	I0318 20:46:21.971500   21691 main.go:141] libmachine: (ha-315064) DBG | 
	I0318 20:46:21.975746   21691 main.go:141] libmachine: (ha-315064) DBG | trying to create private KVM network mk-ha-315064 192.168.39.0/24...
	I0318 20:46:22.036788   21691 main.go:141] libmachine: (ha-315064) DBG | private KVM network mk-ha-315064 192.168.39.0/24 created
	I0318 20:46:22.036812   21691 main.go:141] libmachine: (ha-315064) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064 ...
	I0318 20:46:22.036829   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:22.036773   21714 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:46:22.036858   21691 main.go:141] libmachine: (ha-315064) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 20:46:22.036926   21691 main.go:141] libmachine: (ha-315064) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 20:46:22.262603   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:22.262489   21714 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa...
	I0318 20:46:22.442782   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:22.442650   21714 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/ha-315064.rawdisk...
	I0318 20:46:22.442819   21691 main.go:141] libmachine: (ha-315064) DBG | Writing magic tar header
	I0318 20:46:22.442832   21691 main.go:141] libmachine: (ha-315064) DBG | Writing SSH key tar header
	I0318 20:46:22.442848   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:22.442813   21714 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064 ...
	I0318 20:46:22.442954   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064
	I0318 20:46:22.442984   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 20:46:22.442994   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064 (perms=drwx------)
	I0318 20:46:22.443010   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:46:22.443026   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 20:46:22.443035   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 20:46:22.443050   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 20:46:22.443059   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home/jenkins
	I0318 20:46:22.443071   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 20:46:22.443089   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 20:46:22.443100   21691 main.go:141] libmachine: (ha-315064) DBG | Checking permissions on dir: /home
	I0318 20:46:22.443111   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 20:46:22.443126   21691 main.go:141] libmachine: (ha-315064) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 20:46:22.443136   21691 main.go:141] libmachine: (ha-315064) Creating domain...
	I0318 20:46:22.443145   21691 main.go:141] libmachine: (ha-315064) DBG | Skipping /home - not owner
	I0318 20:46:22.444100   21691 main.go:141] libmachine: (ha-315064) define libvirt domain using xml: 
	I0318 20:46:22.444116   21691 main.go:141] libmachine: (ha-315064) <domain type='kvm'>
	I0318 20:46:22.444122   21691 main.go:141] libmachine: (ha-315064)   <name>ha-315064</name>
	I0318 20:46:22.444130   21691 main.go:141] libmachine: (ha-315064)   <memory unit='MiB'>2200</memory>
	I0318 20:46:22.444136   21691 main.go:141] libmachine: (ha-315064)   <vcpu>2</vcpu>
	I0318 20:46:22.444140   21691 main.go:141] libmachine: (ha-315064)   <features>
	I0318 20:46:22.444145   21691 main.go:141] libmachine: (ha-315064)     <acpi/>
	I0318 20:46:22.444149   21691 main.go:141] libmachine: (ha-315064)     <apic/>
	I0318 20:46:22.444155   21691 main.go:141] libmachine: (ha-315064)     <pae/>
	I0318 20:46:22.444161   21691 main.go:141] libmachine: (ha-315064)     
	I0318 20:46:22.444168   21691 main.go:141] libmachine: (ha-315064)   </features>
	I0318 20:46:22.444178   21691 main.go:141] libmachine: (ha-315064)   <cpu mode='host-passthrough'>
	I0318 20:46:22.444187   21691 main.go:141] libmachine: (ha-315064)   
	I0318 20:46:22.444193   21691 main.go:141] libmachine: (ha-315064)   </cpu>
	I0318 20:46:22.444216   21691 main.go:141] libmachine: (ha-315064)   <os>
	I0318 20:46:22.444233   21691 main.go:141] libmachine: (ha-315064)     <type>hvm</type>
	I0318 20:46:22.444243   21691 main.go:141] libmachine: (ha-315064)     <boot dev='cdrom'/>
	I0318 20:46:22.444254   21691 main.go:141] libmachine: (ha-315064)     <boot dev='hd'/>
	I0318 20:46:22.444264   21691 main.go:141] libmachine: (ha-315064)     <bootmenu enable='no'/>
	I0318 20:46:22.444273   21691 main.go:141] libmachine: (ha-315064)   </os>
	I0318 20:46:22.444289   21691 main.go:141] libmachine: (ha-315064)   <devices>
	I0318 20:46:22.444306   21691 main.go:141] libmachine: (ha-315064)     <disk type='file' device='cdrom'>
	I0318 20:46:22.444331   21691 main.go:141] libmachine: (ha-315064)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/boot2docker.iso'/>
	I0318 20:46:22.444342   21691 main.go:141] libmachine: (ha-315064)       <target dev='hdc' bus='scsi'/>
	I0318 20:46:22.444357   21691 main.go:141] libmachine: (ha-315064)       <readonly/>
	I0318 20:46:22.444368   21691 main.go:141] libmachine: (ha-315064)     </disk>
	I0318 20:46:22.444386   21691 main.go:141] libmachine: (ha-315064)     <disk type='file' device='disk'>
	I0318 20:46:22.444403   21691 main.go:141] libmachine: (ha-315064)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 20:46:22.444416   21691 main.go:141] libmachine: (ha-315064)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/ha-315064.rawdisk'/>
	I0318 20:46:22.444424   21691 main.go:141] libmachine: (ha-315064)       <target dev='hda' bus='virtio'/>
	I0318 20:46:22.444436   21691 main.go:141] libmachine: (ha-315064)     </disk>
	I0318 20:46:22.444447   21691 main.go:141] libmachine: (ha-315064)     <interface type='network'>
	I0318 20:46:22.444460   21691 main.go:141] libmachine: (ha-315064)       <source network='mk-ha-315064'/>
	I0318 20:46:22.444471   21691 main.go:141] libmachine: (ha-315064)       <model type='virtio'/>
	I0318 20:46:22.444502   21691 main.go:141] libmachine: (ha-315064)     </interface>
	I0318 20:46:22.444523   21691 main.go:141] libmachine: (ha-315064)     <interface type='network'>
	I0318 20:46:22.444535   21691 main.go:141] libmachine: (ha-315064)       <source network='default'/>
	I0318 20:46:22.444546   21691 main.go:141] libmachine: (ha-315064)       <model type='virtio'/>
	I0318 20:46:22.444559   21691 main.go:141] libmachine: (ha-315064)     </interface>
	I0318 20:46:22.444570   21691 main.go:141] libmachine: (ha-315064)     <serial type='pty'>
	I0318 20:46:22.444584   21691 main.go:141] libmachine: (ha-315064)       <target port='0'/>
	I0318 20:46:22.444593   21691 main.go:141] libmachine: (ha-315064)     </serial>
	I0318 20:46:22.444610   21691 main.go:141] libmachine: (ha-315064)     <console type='pty'>
	I0318 20:46:22.444627   21691 main.go:141] libmachine: (ha-315064)       <target type='serial' port='0'/>
	I0318 20:46:22.444642   21691 main.go:141] libmachine: (ha-315064)     </console>
	I0318 20:46:22.444652   21691 main.go:141] libmachine: (ha-315064)     <rng model='virtio'>
	I0318 20:46:22.444662   21691 main.go:141] libmachine: (ha-315064)       <backend model='random'>/dev/random</backend>
	I0318 20:46:22.444671   21691 main.go:141] libmachine: (ha-315064)     </rng>
	I0318 20:46:22.444678   21691 main.go:141] libmachine: (ha-315064)     
	I0318 20:46:22.444691   21691 main.go:141] libmachine: (ha-315064)     
	I0318 20:46:22.444702   21691 main.go:141] libmachine: (ha-315064)   </devices>
	I0318 20:46:22.444711   21691 main.go:141] libmachine: (ha-315064) </domain>
	I0318 20:46:22.444725   21691 main.go:141] libmachine: (ha-315064) 
	I0318 20:46:22.448616   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:4b:27:78 in network default
	I0318 20:46:22.449166   21691 main.go:141] libmachine: (ha-315064) Ensuring networks are active...
	I0318 20:46:22.449188   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:22.449975   21691 main.go:141] libmachine: (ha-315064) Ensuring network default is active
	I0318 20:46:22.450274   21691 main.go:141] libmachine: (ha-315064) Ensuring network mk-ha-315064 is active
	I0318 20:46:22.450831   21691 main.go:141] libmachine: (ha-315064) Getting domain xml...
	I0318 20:46:22.451526   21691 main.go:141] libmachine: (ha-315064) Creating domain...
	I0318 20:46:23.593589   21691 main.go:141] libmachine: (ha-315064) Waiting to get IP...
	I0318 20:46:23.594447   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:23.594836   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:23.594866   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:23.594820   21714 retry.go:31] will retry after 274.347043ms: waiting for machine to come up
	I0318 20:46:23.870347   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:23.870700   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:23.870726   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:23.870671   21714 retry.go:31] will retry after 265.423423ms: waiting for machine to come up
	I0318 20:46:24.137991   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:24.138421   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:24.138448   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:24.138369   21714 retry.go:31] will retry after 324.361893ms: waiting for machine to come up
	I0318 20:46:24.463757   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:24.464171   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:24.464194   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:24.464121   21714 retry.go:31] will retry after 485.166496ms: waiting for machine to come up
	I0318 20:46:24.950536   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:24.950954   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:24.950988   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:24.950924   21714 retry.go:31] will retry after 659.735908ms: waiting for machine to come up
	I0318 20:46:25.612625   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:25.612956   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:25.613002   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:25.612927   21714 retry.go:31] will retry after 577.777037ms: waiting for machine to come up
	I0318 20:46:26.192551   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:26.193016   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:26.193054   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:26.192965   21714 retry.go:31] will retry after 916.92507ms: waiting for machine to come up
	I0318 20:46:27.111346   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:27.111701   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:27.111730   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:27.111650   21714 retry.go:31] will retry after 1.061259623s: waiting for machine to come up
	I0318 20:46:28.174803   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:28.175229   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:28.175252   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:28.175187   21714 retry.go:31] will retry after 1.287700397s: waiting for machine to come up
	I0318 20:46:29.464552   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:29.464939   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:29.464968   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:29.464879   21714 retry.go:31] will retry after 2.206310176s: waiting for machine to come up
	I0318 20:46:31.674070   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:31.674452   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:31.674482   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:31.674405   21714 retry.go:31] will retry after 2.003425876s: waiting for machine to come up
	I0318 20:46:33.678856   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:33.679288   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:33.679316   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:33.679243   21714 retry.go:31] will retry after 3.186798927s: waiting for machine to come up
	I0318 20:46:36.869459   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:36.869755   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:36.869785   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:36.869738   21714 retry.go:31] will retry after 2.922529074s: waiting for machine to come up
	I0318 20:46:39.795981   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:39.796448   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find current IP address of domain ha-315064 in network mk-ha-315064
	I0318 20:46:39.796471   21691 main.go:141] libmachine: (ha-315064) DBG | I0318 20:46:39.796409   21714 retry.go:31] will retry after 4.959899587s: waiting for machine to come up
	I0318 20:46:44.759102   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.759533   21691 main.go:141] libmachine: (ha-315064) Found IP for machine: 192.168.39.79
	I0318 20:46:44.759559   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has current primary IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.759569   21691 main.go:141] libmachine: (ha-315064) Reserving static IP address...
	I0318 20:46:44.759888   21691 main.go:141] libmachine: (ha-315064) DBG | unable to find host DHCP lease matching {name: "ha-315064", mac: "52:54:00:3e:a5:8a", ip: "192.168.39.79"} in network mk-ha-315064
	I0318 20:46:44.826952   21691 main.go:141] libmachine: (ha-315064) DBG | Getting to WaitForSSH function...
	I0318 20:46:44.826984   21691 main.go:141] libmachine: (ha-315064) Reserved static IP address: 192.168.39.79
	I0318 20:46:44.826996   21691 main.go:141] libmachine: (ha-315064) Waiting for SSH to be available...
	I0318 20:46:44.829203   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.829555   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:44.829582   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.829715   21691 main.go:141] libmachine: (ha-315064) DBG | Using SSH client type: external
	I0318 20:46:44.829751   21691 main.go:141] libmachine: (ha-315064) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa (-rw-------)
	I0318 20:46:44.829784   21691 main.go:141] libmachine: (ha-315064) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 20:46:44.829794   21691 main.go:141] libmachine: (ha-315064) DBG | About to run SSH command:
	I0318 20:46:44.829819   21691 main.go:141] libmachine: (ha-315064) DBG | exit 0
	I0318 20:46:44.952791   21691 main.go:141] libmachine: (ha-315064) DBG | SSH cmd err, output: <nil>: 
	I0318 20:46:44.953132   21691 main.go:141] libmachine: (ha-315064) KVM machine creation complete!
	I0318 20:46:44.953477   21691 main.go:141] libmachine: (ha-315064) Calling .GetConfigRaw
	I0318 20:46:44.953937   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:44.954148   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:44.954308   21691 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 20:46:44.954323   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:46:44.955330   21691 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 20:46:44.955344   21691 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 20:46:44.955352   21691 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 20:46:44.955362   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:44.957299   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.957630   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:44.957659   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:44.957763   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:44.957960   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:44.958090   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:44.958228   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:44.958377   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:44.958539   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:44.958548   21691 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 20:46:45.060021   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:46:45.060046   21691 main.go:141] libmachine: Detecting the provisioner...
	I0318 20:46:45.060056   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.062452   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.062754   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.062786   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.062908   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.063066   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.063194   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.063374   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.063511   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:45.063700   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:45.063710   21691 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 20:46:45.173941   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 20:46:45.174035   21691 main.go:141] libmachine: found compatible host: buildroot
	I0318 20:46:45.174052   21691 main.go:141] libmachine: Provisioning with buildroot...
	I0318 20:46:45.174064   21691 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:46:45.174329   21691 buildroot.go:166] provisioning hostname "ha-315064"
	I0318 20:46:45.174358   21691 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:46:45.174550   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.176920   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.177246   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.177265   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.177395   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.177559   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.177704   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.177840   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.177970   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:45.178138   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:45.178149   21691 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-315064 && echo "ha-315064" | sudo tee /etc/hostname
	I0318 20:46:45.296459   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064
	
	I0318 20:46:45.296495   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.299139   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.299483   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.299531   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.299693   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.299880   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.300032   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.300156   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.300381   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:45.300532   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:45.300553   21691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-315064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-315064/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-315064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:46:45.417717   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:46:45.417741   21691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:46:45.417776   21691 buildroot.go:174] setting up certificates
	I0318 20:46:45.417788   21691 provision.go:84] configureAuth start
	I0318 20:46:45.417807   21691 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:46:45.418149   21691 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:46:45.420583   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.420893   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.420936   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.421042   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.423034   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.423342   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.423358   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.423476   21691 provision.go:143] copyHostCerts
	I0318 20:46:45.423504   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:46:45.423542   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 20:46:45.423552   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:46:45.423616   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:46:45.423715   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:46:45.423736   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 20:46:45.423743   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:46:45.423768   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:46:45.423821   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:46:45.423836   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 20:46:45.423843   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:46:45.423868   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:46:45.423925   21691 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.ha-315064 san=[127.0.0.1 192.168.39.79 ha-315064 localhost minikube]
	I0318 20:46:45.605107   21691 provision.go:177] copyRemoteCerts
	I0318 20:46:45.605174   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:46:45.605197   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.607728   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.608000   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.608024   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.608171   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.608342   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.608472   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.608605   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:46:45.691465   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 20:46:45.691537   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 20:46:45.718048   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 20:46:45.718104   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:46:45.744716   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 20:46:45.744773   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 20:46:45.771474   21691 provision.go:87] duration metric: took 353.673873ms to configureAuth
	I0318 20:46:45.771509   21691 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:46:45.771731   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:46:45.771821   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:45.774441   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.774759   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:45.774786   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:45.774916   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:45.775052   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.775211   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:45.775344   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:45.775465   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:45.775609   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:45.775624   21691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:46:46.049303   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 20:46:46.049340   21691 main.go:141] libmachine: Checking connection to Docker...
	I0318 20:46:46.049351   21691 main.go:141] libmachine: (ha-315064) Calling .GetURL
	I0318 20:46:46.050640   21691 main.go:141] libmachine: (ha-315064) DBG | Using libvirt version 6000000
	I0318 20:46:46.052726   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.053047   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.053075   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.053197   21691 main.go:141] libmachine: Docker is up and running!
	I0318 20:46:46.053210   21691 main.go:141] libmachine: Reticulating splines...
	I0318 20:46:46.053218   21691 client.go:171] duration metric: took 24.084704977s to LocalClient.Create
	I0318 20:46:46.053242   21691 start.go:167] duration metric: took 24.084766408s to libmachine.API.Create "ha-315064"
	I0318 20:46:46.053254   21691 start.go:293] postStartSetup for "ha-315064" (driver="kvm2")
	I0318 20:46:46.053267   21691 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:46:46.053289   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.053490   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:46:46.053513   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:46.055539   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.055891   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.055917   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.056065   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:46.056248   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.056380   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:46.056534   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:46:46.142462   21691 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:46:46.147241   21691 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:46:46.147262   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:46:46.147313   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:46:46.147388   21691 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 20:46:46.147398   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 20:46:46.147489   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 20:46:46.159746   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:46:46.186951   21691 start.go:296] duration metric: took 133.684985ms for postStartSetup
	I0318 20:46:46.186996   21691 main.go:141] libmachine: (ha-315064) Calling .GetConfigRaw
	I0318 20:46:46.187537   21691 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:46:46.189957   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.190310   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.190339   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.190568   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:46:46.190744   21691 start.go:128] duration metric: took 24.239116546s to createHost
	I0318 20:46:46.190766   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:46.193015   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.193337   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.193361   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.193499   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:46.193701   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.193865   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.193997   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:46.194144   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:46:46.194299   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:46:46.194315   21691 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:46:46.297962   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710794806.283404586
	
	I0318 20:46:46.297988   21691 fix.go:216] guest clock: 1710794806.283404586
	I0318 20:46:46.297997   21691 fix.go:229] Guest: 2024-03-18 20:46:46.283404586 +0000 UTC Remote: 2024-03-18 20:46:46.190756996 +0000 UTC m=+24.350032451 (delta=92.64759ms)
	I0318 20:46:46.298014   21691 fix.go:200] guest clock delta is within tolerance: 92.64759ms
	I0318 20:46:46.298020   21691 start.go:83] releasing machines lock for "ha-315064", held for 24.346466173s
	I0318 20:46:46.298035   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.298246   21691 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:46:46.300628   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.300995   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.301026   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.301199   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.301635   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.301795   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:46:46.301893   21691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:46:46.301930   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:46.301996   21691 ssh_runner.go:195] Run: cat /version.json
	I0318 20:46:46.302026   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:46:46.304410   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.304681   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.304705   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.304727   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.304836   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:46.304994   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.305129   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:46.305166   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:46.305193   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:46.305282   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:46:46.305358   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:46:46.305511   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:46:46.305652   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:46:46.305823   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:46:46.389952   21691 ssh_runner.go:195] Run: systemctl --version
	I0318 20:46:46.412414   21691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:46:46.585196   21691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:46:46.591427   21691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:46:46.591487   21691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:46:46.608427   21691 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 20:46:46.608447   21691 start.go:494] detecting cgroup driver to use...
	I0318 20:46:46.608509   21691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:46:46.626684   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:46:46.641728   21691 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:46:46.641789   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:46:46.657059   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:46:46.671879   21691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:46:46.788408   21691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:46:46.934253   21691 docker.go:233] disabling docker service ...
	I0318 20:46:46.934319   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:46:46.950155   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:46:46.964081   21691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:46:47.099604   21691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:46:47.239156   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:46:47.254226   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:46:47.274183   21691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:46:47.274236   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.286187   21691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:46:47.286230   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.298347   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.310240   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.321968   21691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:46:47.334039   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.345756   21691 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.363876   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:46:47.375577   21691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:46:47.386154   21691 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 20:46:47.386204   21691 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 20:46:47.401415   21691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 20:46:47.412549   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:46:47.551080   21691 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 20:46:47.687299   21691 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:46:47.687377   21691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:46:47.693010   21691 start.go:562] Will wait 60s for crictl version
	I0318 20:46:47.693057   21691 ssh_runner.go:195] Run: which crictl
	I0318 20:46:47.697174   21691 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:46:47.737039   21691 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 20:46:47.737127   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:46:47.766077   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:46:47.796915   21691 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:46:47.798150   21691 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:46:47.800442   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:47.800745   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:46:47.800774   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:46:47.800978   21691 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:46:47.805458   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:46:47.820533   21691 kubeadm.go:877] updating cluster {Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 20:46:47.820624   21691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:46:47.820658   21691 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:46:47.854790   21691 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 20:46:47.854855   21691 ssh_runner.go:195] Run: which lz4
	I0318 20:46:47.859097   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 20:46:47.859192   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 20:46:47.863620   21691 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 20:46:47.863640   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 20:46:49.661674   21691 crio.go:462] duration metric: took 1.802494227s to copy over tarball
	I0318 20:46:49.661746   21691 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 20:46:52.299139   21691 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.637357249s)
	I0318 20:46:52.299169   21691 crio.go:469] duration metric: took 2.637464587s to extract the tarball
	I0318 20:46:52.299177   21691 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 20:46:52.342658   21691 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:46:52.390732   21691 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 20:46:52.390750   21691 cache_images.go:84] Images are preloaded, skipping loading
	I0318 20:46:52.390757   21691 kubeadm.go:928] updating node { 192.168.39.79 8443 v1.28.4 crio true true} ...
	I0318 20:46:52.390891   21691 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-315064 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:46:52.390975   21691 ssh_runner.go:195] Run: crio config
	I0318 20:46:52.438327   21691 cni.go:84] Creating CNI manager for ""
	I0318 20:46:52.438349   21691 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 20:46:52.438365   21691 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 20:46:52.438389   21691 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.79 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-315064 NodeName:ha-315064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 20:46:52.438523   21691 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-315064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 20:46:52.438549   21691 kube-vip.go:111] generating kube-vip config ...
	I0318 20:46:52.438584   21691 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 20:46:52.458947   21691 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 20:46:52.459061   21691 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 20:46:52.459126   21691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:46:52.471105   21691 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 20:46:52.471162   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 20:46:52.482670   21691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0318 20:46:52.501751   21691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:46:52.520325   21691 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0318 20:46:52.539485   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 20:46:52.558348   21691 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 20:46:52.563071   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:46:52.577845   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:46:52.702982   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:46:52.720532   21691 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064 for IP: 192.168.39.79
	I0318 20:46:52.720549   21691 certs.go:194] generating shared ca certs ...
	I0318 20:46:52.720566   21691 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.720705   21691 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:46:52.720755   21691 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:46:52.720768   21691 certs.go:256] generating profile certs ...
	I0318 20:46:52.720833   21691 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key
	I0318 20:46:52.720853   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt with IP's: []
	I0318 20:46:52.846655   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt ...
	I0318 20:46:52.846706   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt: {Name:mkfbf0e8628dd07990bd6fe2635e15f4b1d135fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.847077   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key ...
	I0318 20:46:52.847109   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key: {Name:mk029b3c519fd721ceecf06ae82b3034b3d72595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.847294   21691 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.f72f8f85
	I0318 20:46:52.847316   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.f72f8f85 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.254]
	I0318 20:46:52.972176   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.f72f8f85 ...
	I0318 20:46:52.972206   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.f72f8f85: {Name:mk809a3d998afad1344d1912954543bd78b5687c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.972348   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.f72f8f85 ...
	I0318 20:46:52.972367   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.f72f8f85: {Name:mk2b16960466efe924cbf02c221964fe69ab0498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:52.972436   21691 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.f72f8f85 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt
	I0318 20:46:52.972520   21691 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.f72f8f85 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key
	I0318 20:46:52.972572   21691 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key
	I0318 20:46:52.972586   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt with IP's: []
	I0318 20:46:53.030704   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt ...
	I0318 20:46:53.030728   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt: {Name:mkb1c6c4fc166282744b97f277714d12fbf364d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:53.030867   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key ...
	I0318 20:46:53.030877   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key: {Name:mka950c2802fbd336e6077e24c694131bb322466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:46:53.030942   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 20:46:53.030957   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 20:46:53.030967   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 20:46:53.030978   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 20:46:53.030990   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 20:46:53.031000   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 20:46:53.031013   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 20:46:53.031022   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 20:46:53.031064   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 20:46:53.031096   21691 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 20:46:53.031105   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:46:53.031128   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:46:53.031151   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:46:53.031171   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:46:53.031205   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:46:53.031229   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 20:46:53.031246   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 20:46:53.031263   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:46:53.031808   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:46:53.060127   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:46:53.085858   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:46:53.111659   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:46:53.137876   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 20:46:53.163264   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 20:46:53.190323   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:46:53.216299   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 20:46:53.242055   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 20:46:53.267645   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 20:46:53.293200   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:46:53.319883   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 20:46:53.338442   21691 ssh_runner.go:195] Run: openssl version
	I0318 20:46:53.344723   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:46:53.357314   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:46:53.362409   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:46:53.362470   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:46:53.368960   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 20:46:53.381477   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 20:46:53.394084   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 20:46:53.399242   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 20:46:53.399296   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 20:46:53.405646   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 20:46:53.418411   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 20:46:53.431704   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 20:46:53.436893   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 20:46:53.436942   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 20:46:53.443434   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 20:46:53.456139   21691 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:46:53.460939   21691 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 20:46:53.460997   21691 kubeadm.go:391] StartCluster: {Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:46:53.461098   21691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 20:46:53.461158   21691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 20:46:53.506898   21691 cri.go:89] found id: ""
	I0318 20:46:53.506965   21691 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 20:46:53.518555   21691 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 20:46:53.532584   21691 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 20:46:53.556262   21691 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 20:46:53.556278   21691 kubeadm.go:156] found existing configuration files:
	
	I0318 20:46:53.556318   21691 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 20:46:53.571812   21691 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 20:46:53.571852   21691 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 20:46:53.589112   21691 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 20:46:53.599484   21691 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 20:46:53.599540   21691 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 20:46:53.617233   21691 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 20:46:53.628061   21691 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 20:46:53.628107   21691 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 20:46:53.639263   21691 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 20:46:53.650052   21691 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 20:46:53.650129   21691 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 20:46:53.661021   21691 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 20:46:53.764528   21691 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 20:46:53.764639   21691 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 20:46:53.910044   21691 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 20:46:53.910185   21691 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 20:46:53.910334   21691 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 20:46:54.139369   21691 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 20:46:54.356012   21691 out.go:204]   - Generating certificates and keys ...
	I0318 20:46:54.356140   21691 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 20:46:54.356261   21691 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 20:46:54.369747   21691 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 20:46:54.746613   21691 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 20:46:54.864651   21691 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 20:46:55.040387   21691 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 20:46:55.150803   21691 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 20:46:55.150976   21691 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-315064 localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
	I0318 20:46:55.258885   21691 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 20:46:55.259021   21691 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-315064 localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
	I0318 20:46:55.331196   21691 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 20:46:55.461613   21691 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 20:46:55.555510   21691 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 20:46:55.555822   21691 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 20:46:55.869968   21691 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 20:46:56.095783   21691 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 20:46:56.334938   21691 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 20:46:56.413457   21691 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 20:46:56.414146   21691 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 20:46:56.417095   21691 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 20:46:56.419100   21691 out.go:204]   - Booting up control plane ...
	I0318 20:46:56.419226   21691 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 20:46:56.419319   21691 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 20:46:56.419390   21691 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 20:46:56.437328   21691 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 20:46:56.438508   21691 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 20:46:56.439122   21691 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 20:46:56.574190   21691 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 20:47:06.192480   21691 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.619608 seconds
	I0318 20:47:06.192620   21691 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 20:47:06.206609   21691 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 20:47:06.741282   21691 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 20:47:06.741487   21691 kubeadm.go:309] [mark-control-plane] Marking the node ha-315064 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 20:47:07.261400   21691 kubeadm.go:309] [bootstrap-token] Using token: 1buc55.ep1i46vz8cpac7up
	I0318 20:47:07.263020   21691 out.go:204]   - Configuring RBAC rules ...
	I0318 20:47:07.263160   21691 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 20:47:07.270124   21691 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 20:47:07.279342   21691 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 20:47:07.283192   21691 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 20:47:07.288935   21691 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 20:47:07.298117   21691 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 20:47:07.311963   21691 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 20:47:07.584725   21691 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 20:47:07.677680   21691 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 20:47:07.678413   21691 kubeadm.go:309] 
	I0318 20:47:07.678517   21691 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 20:47:07.678560   21691 kubeadm.go:309] 
	I0318 20:47:07.678648   21691 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 20:47:07.678660   21691 kubeadm.go:309] 
	I0318 20:47:07.678694   21691 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 20:47:07.678751   21691 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 20:47:07.678818   21691 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 20:47:07.678827   21691 kubeadm.go:309] 
	I0318 20:47:07.678914   21691 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 20:47:07.678937   21691 kubeadm.go:309] 
	I0318 20:47:07.679017   21691 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 20:47:07.679030   21691 kubeadm.go:309] 
	I0318 20:47:07.679108   21691 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 20:47:07.679206   21691 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 20:47:07.679298   21691 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 20:47:07.679324   21691 kubeadm.go:309] 
	I0318 20:47:07.679444   21691 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 20:47:07.679545   21691 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 20:47:07.679561   21691 kubeadm.go:309] 
	I0318 20:47:07.679672   21691 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 1buc55.ep1i46vz8cpac7up \
	I0318 20:47:07.679819   21691 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 20:47:07.679868   21691 kubeadm.go:309] 	--control-plane 
	I0318 20:47:07.679881   21691 kubeadm.go:309] 
	I0318 20:47:07.680000   21691 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 20:47:07.680012   21691 kubeadm.go:309] 
	I0318 20:47:07.680110   21691 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 1buc55.ep1i46vz8cpac7up \
	I0318 20:47:07.680257   21691 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 20:47:07.680922   21691 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 20:47:07.680953   21691 cni.go:84] Creating CNI manager for ""
	I0318 20:47:07.680965   21691 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 20:47:07.682696   21691 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 20:47:07.684140   21691 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 20:47:07.693797   21691 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 20:47:07.693816   21691 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 20:47:07.734800   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 20:47:08.789983   21691 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.055150964s)
	I0318 20:47:08.790026   21691 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 20:47:08.790117   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:08.790165   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-315064 minikube.k8s.io/updated_at=2024_03_18T20_47_08_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=ha-315064 minikube.k8s.io/primary=true
	I0318 20:47:08.807213   21691 ops.go:34] apiserver oom_adj: -16
	I0318 20:47:09.014164   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:09.514197   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:10.014959   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:10.514611   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:11.015108   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:11.515168   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:12.015241   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:12.514751   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:13.014995   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:13.515000   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:14.014990   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:14.514197   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:15.014307   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:15.514814   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:16.014503   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:16.514710   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:17.014637   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:17.514453   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 20:47:17.733344   21691 kubeadm.go:1107] duration metric: took 8.943275498s to wait for elevateKubeSystemPrivileges
	W0318 20:47:17.733392   21691 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 20:47:17.733401   21691 kubeadm.go:393] duration metric: took 24.272407947s to StartCluster
	I0318 20:47:17.733421   21691 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:17.733507   21691 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:47:17.734420   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:17.734673   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 20:47:17.734693   21691 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:47:17.734725   21691 start.go:240] waiting for startup goroutines ...
	I0318 20:47:17.734737   21691 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 20:47:17.734808   21691 addons.go:69] Setting storage-provisioner=true in profile "ha-315064"
	I0318 20:47:17.734828   21691 addons.go:69] Setting default-storageclass=true in profile "ha-315064"
	I0318 20:47:17.734838   21691 addons.go:234] Setting addon storage-provisioner=true in "ha-315064"
	I0318 20:47:17.734859   21691 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-315064"
	I0318 20:47:17.734868   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:47:17.734897   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:47:17.735212   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.735255   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.735285   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.735314   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.750053   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0318 20:47:17.750332   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0318 20:47:17.750493   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.750741   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.750946   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.750969   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.751197   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.751220   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.751283   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.751470   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:47:17.751527   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.752074   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.752108   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.753544   21691 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:47:17.753766   21691 kapi.go:59] client config for ha-315064: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt", KeyFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key", CAFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 20:47:17.754560   21691 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 20:47:17.754685   21691 addons.go:234] Setting addon default-storageclass=true in "ha-315064"
	I0318 20:47:17.754725   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:47:17.755080   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.755108   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.766882   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0318 20:47:17.767291   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.767788   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.767816   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.768103   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.768307   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:47:17.768629   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45007
	I0318 20:47:17.769078   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.769721   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.769742   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.770062   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:47:17.770234   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.771912   21691 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 20:47:17.770730   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:17.773321   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:17.773434   21691 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 20:47:17.773455   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 20:47:17.773475   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:47:17.776104   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:17.776481   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:47:17.776500   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:17.776723   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:47:17.776886   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:47:17.777054   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:47:17.777214   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:47:17.787628   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0318 20:47:17.788005   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:17.788403   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:17.788427   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:17.788766   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:17.788949   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:47:17.790234   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:47:17.790475   21691 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 20:47:17.790493   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 20:47:17.790510   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:47:17.792951   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:17.793336   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:47:17.793362   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:17.793470   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:47:17.793629   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:47:17.793778   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:47:17.793916   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:47:17.999436   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 20:47:18.009444   21691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 20:47:18.023029   21691 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 20:47:18.901432   21691 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 20:47:19.066592   21691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.057118931s)
	I0318 20:47:19.066643   21691 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043585863s)
	I0318 20:47:19.066680   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.066697   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.066649   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.066738   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.066989   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.067003   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.067011   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.067018   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.067076   21691 main.go:141] libmachine: (ha-315064) DBG | Closing plugin on server side
	I0318 20:47:19.067164   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.067186   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.067203   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.067214   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.067222   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.067236   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.067352   21691 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 20:47:19.067366   21691 round_trippers.go:469] Request Headers:
	I0318 20:47:19.067386   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:47:19.067398   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:47:19.067459   21691 main.go:141] libmachine: (ha-315064) DBG | Closing plugin on server side
	I0318 20:47:19.067459   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.067494   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.078473   21691 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 20:47:19.078970   21691 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 20:47:19.078982   21691 round_trippers.go:469] Request Headers:
	I0318 20:47:19.078989   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:47:19.078995   21691 round_trippers.go:473]     Content-Type: application/json
	I0318 20:47:19.078999   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:47:19.081842   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:47:19.082129   21691 main.go:141] libmachine: Making call to close driver server
	I0318 20:47:19.082141   21691 main.go:141] libmachine: (ha-315064) Calling .Close
	I0318 20:47:19.082374   21691 main.go:141] libmachine: (ha-315064) DBG | Closing plugin on server side
	I0318 20:47:19.082390   21691 main.go:141] libmachine: Successfully made call to close driver server
	I0318 20:47:19.082400   21691 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 20:47:19.084895   21691 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 20:47:19.086252   21691 addons.go:505] duration metric: took 1.351513507s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 20:47:19.086285   21691 start.go:245] waiting for cluster config update ...
	I0318 20:47:19.086300   21691 start.go:254] writing updated cluster config ...
	I0318 20:47:19.087978   21691 out.go:177] 
	I0318 20:47:19.089526   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:47:19.089591   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:47:19.091233   21691 out.go:177] * Starting "ha-315064-m02" control-plane node in "ha-315064" cluster
	I0318 20:47:19.092566   21691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:47:19.092584   21691 cache.go:56] Caching tarball of preloaded images
	I0318 20:47:19.092665   21691 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:47:19.092677   21691 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:47:19.092744   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:47:19.092888   21691 start.go:360] acquireMachinesLock for ha-315064-m02: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:47:19.092955   21691 start.go:364] duration metric: took 32.077µs to acquireMachinesLock for "ha-315064-m02"
	I0318 20:47:19.092976   21691 start.go:93] Provisioning new machine with config: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:47:19.093037   21691 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0318 20:47:19.095468   21691 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 20:47:19.095533   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:19.095556   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:19.110060   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0318 20:47:19.110524   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:19.110960   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:19.110981   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:19.111319   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:19.111532   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetMachineName
	I0318 20:47:19.111704   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:19.111876   21691 start.go:159] libmachine.API.Create for "ha-315064" (driver="kvm2")
	I0318 20:47:19.111902   21691 client.go:168] LocalClient.Create starting
	I0318 20:47:19.111936   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 20:47:19.111970   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:47:19.111992   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:47:19.112065   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 20:47:19.112096   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:47:19.112119   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:47:19.112147   21691 main.go:141] libmachine: Running pre-create checks...
	I0318 20:47:19.112158   21691 main.go:141] libmachine: (ha-315064-m02) Calling .PreCreateCheck
	I0318 20:47:19.112327   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetConfigRaw
	I0318 20:47:19.112766   21691 main.go:141] libmachine: Creating machine...
	I0318 20:47:19.112785   21691 main.go:141] libmachine: (ha-315064-m02) Calling .Create
	I0318 20:47:19.112939   21691 main.go:141] libmachine: (ha-315064-m02) Creating KVM machine...
	I0318 20:47:19.114071   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found existing default KVM network
	I0318 20:47:19.114268   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found existing private KVM network mk-ha-315064
	I0318 20:47:19.114400   21691 main.go:141] libmachine: (ha-315064-m02) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02 ...
	I0318 20:47:19.114424   21691 main.go:141] libmachine: (ha-315064-m02) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 20:47:19.114494   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:19.114393   22037 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:47:19.114571   21691 main.go:141] libmachine: (ha-315064-m02) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 20:47:19.336459   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:19.336289   22037 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa...
	I0318 20:47:19.653315   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:19.653206   22037 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/ha-315064-m02.rawdisk...
	I0318 20:47:19.653352   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Writing magic tar header
	I0318 20:47:19.653366   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Writing SSH key tar header
	I0318 20:47:19.653382   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:19.653307   22037 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02 ...
	I0318 20:47:19.653400   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02
	I0318 20:47:19.653420   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 20:47:19.653435   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:47:19.653451   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02 (perms=drwx------)
	I0318 20:47:19.653470   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 20:47:19.653487   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 20:47:19.653496   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 20:47:19.653504   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 20:47:19.653511   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home/jenkins
	I0318 20:47:19.653518   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Checking permissions on dir: /home
	I0318 20:47:19.653528   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Skipping /home - not owner
	I0318 20:47:19.653543   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 20:47:19.653575   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 20:47:19.653601   21691 main.go:141] libmachine: (ha-315064-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 20:47:19.653615   21691 main.go:141] libmachine: (ha-315064-m02) Creating domain...
	I0318 20:47:19.654390   21691 main.go:141] libmachine: (ha-315064-m02) define libvirt domain using xml: 
	I0318 20:47:19.654412   21691 main.go:141] libmachine: (ha-315064-m02) <domain type='kvm'>
	I0318 20:47:19.654422   21691 main.go:141] libmachine: (ha-315064-m02)   <name>ha-315064-m02</name>
	I0318 20:47:19.654437   21691 main.go:141] libmachine: (ha-315064-m02)   <memory unit='MiB'>2200</memory>
	I0318 20:47:19.654451   21691 main.go:141] libmachine: (ha-315064-m02)   <vcpu>2</vcpu>
	I0318 20:47:19.654462   21691 main.go:141] libmachine: (ha-315064-m02)   <features>
	I0318 20:47:19.654473   21691 main.go:141] libmachine: (ha-315064-m02)     <acpi/>
	I0318 20:47:19.654484   21691 main.go:141] libmachine: (ha-315064-m02)     <apic/>
	I0318 20:47:19.654494   21691 main.go:141] libmachine: (ha-315064-m02)     <pae/>
	I0318 20:47:19.654505   21691 main.go:141] libmachine: (ha-315064-m02)     
	I0318 20:47:19.654534   21691 main.go:141] libmachine: (ha-315064-m02)   </features>
	I0318 20:47:19.654557   21691 main.go:141] libmachine: (ha-315064-m02)   <cpu mode='host-passthrough'>
	I0318 20:47:19.654569   21691 main.go:141] libmachine: (ha-315064-m02)   
	I0318 20:47:19.654580   21691 main.go:141] libmachine: (ha-315064-m02)   </cpu>
	I0318 20:47:19.654593   21691 main.go:141] libmachine: (ha-315064-m02)   <os>
	I0318 20:47:19.654604   21691 main.go:141] libmachine: (ha-315064-m02)     <type>hvm</type>
	I0318 20:47:19.654616   21691 main.go:141] libmachine: (ha-315064-m02)     <boot dev='cdrom'/>
	I0318 20:47:19.654627   21691 main.go:141] libmachine: (ha-315064-m02)     <boot dev='hd'/>
	I0318 20:47:19.654638   21691 main.go:141] libmachine: (ha-315064-m02)     <bootmenu enable='no'/>
	I0318 20:47:19.654651   21691 main.go:141] libmachine: (ha-315064-m02)   </os>
	I0318 20:47:19.654659   21691 main.go:141] libmachine: (ha-315064-m02)   <devices>
	I0318 20:47:19.654675   21691 main.go:141] libmachine: (ha-315064-m02)     <disk type='file' device='cdrom'>
	I0318 20:47:19.654692   21691 main.go:141] libmachine: (ha-315064-m02)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/boot2docker.iso'/>
	I0318 20:47:19.654705   21691 main.go:141] libmachine: (ha-315064-m02)       <target dev='hdc' bus='scsi'/>
	I0318 20:47:19.654724   21691 main.go:141] libmachine: (ha-315064-m02)       <readonly/>
	I0318 20:47:19.654745   21691 main.go:141] libmachine: (ha-315064-m02)     </disk>
	I0318 20:47:19.654763   21691 main.go:141] libmachine: (ha-315064-m02)     <disk type='file' device='disk'>
	I0318 20:47:19.654790   21691 main.go:141] libmachine: (ha-315064-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 20:47:19.654815   21691 main.go:141] libmachine: (ha-315064-m02)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/ha-315064-m02.rawdisk'/>
	I0318 20:47:19.654830   21691 main.go:141] libmachine: (ha-315064-m02)       <target dev='hda' bus='virtio'/>
	I0318 20:47:19.654838   21691 main.go:141] libmachine: (ha-315064-m02)     </disk>
	I0318 20:47:19.654851   21691 main.go:141] libmachine: (ha-315064-m02)     <interface type='network'>
	I0318 20:47:19.654859   21691 main.go:141] libmachine: (ha-315064-m02)       <source network='mk-ha-315064'/>
	I0318 20:47:19.654870   21691 main.go:141] libmachine: (ha-315064-m02)       <model type='virtio'/>
	I0318 20:47:19.654880   21691 main.go:141] libmachine: (ha-315064-m02)     </interface>
	I0318 20:47:19.654901   21691 main.go:141] libmachine: (ha-315064-m02)     <interface type='network'>
	I0318 20:47:19.654916   21691 main.go:141] libmachine: (ha-315064-m02)       <source network='default'/>
	I0318 20:47:19.654926   21691 main.go:141] libmachine: (ha-315064-m02)       <model type='virtio'/>
	I0318 20:47:19.654933   21691 main.go:141] libmachine: (ha-315064-m02)     </interface>
	I0318 20:47:19.654945   21691 main.go:141] libmachine: (ha-315064-m02)     <serial type='pty'>
	I0318 20:47:19.654955   21691 main.go:141] libmachine: (ha-315064-m02)       <target port='0'/>
	I0318 20:47:19.654964   21691 main.go:141] libmachine: (ha-315064-m02)     </serial>
	I0318 20:47:19.654975   21691 main.go:141] libmachine: (ha-315064-m02)     <console type='pty'>
	I0318 20:47:19.655000   21691 main.go:141] libmachine: (ha-315064-m02)       <target type='serial' port='0'/>
	I0318 20:47:19.655017   21691 main.go:141] libmachine: (ha-315064-m02)     </console>
	I0318 20:47:19.655032   21691 main.go:141] libmachine: (ha-315064-m02)     <rng model='virtio'>
	I0318 20:47:19.655049   21691 main.go:141] libmachine: (ha-315064-m02)       <backend model='random'>/dev/random</backend>
	I0318 20:47:19.655058   21691 main.go:141] libmachine: (ha-315064-m02)     </rng>
	I0318 20:47:19.655065   21691 main.go:141] libmachine: (ha-315064-m02)     
	I0318 20:47:19.655077   21691 main.go:141] libmachine: (ha-315064-m02)     
	I0318 20:47:19.655088   21691 main.go:141] libmachine: (ha-315064-m02)   </devices>
	I0318 20:47:19.655099   21691 main.go:141] libmachine: (ha-315064-m02) </domain>
	I0318 20:47:19.655110   21691 main.go:141] libmachine: (ha-315064-m02) 
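Annotation: the block above is the raw libvirt domain XML the kvm2 driver generates for the m02 node. As a rough illustration only (not the driver's actual code), a minimal Go sketch that defines and starts such a domain through the libvirt Go bindings might look like the following; the import path and the exact call sequence are assumptions.

    package main

    import (
    	"log"
    	"os"

    	libvirt "libvirt.org/go/libvirt" // assumed import path for the Go bindings
    )

    func main() {
    	// Read a domain definition like the XML logged above.
    	xml, err := os.ReadFile("ha-315064-m02.xml")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Connect to the same URI the driver uses (KVMQemuURI:qemu:///system).
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Define the persistent domain, then start ("create") it.
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    }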
	I0318 20:47:19.661541   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:cd:c3:4e in network default
	I0318 20:47:19.662052   21691 main.go:141] libmachine: (ha-315064-m02) Ensuring networks are active...
	I0318 20:47:19.662074   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:19.662747   21691 main.go:141] libmachine: (ha-315064-m02) Ensuring network default is active
	I0318 20:47:19.663055   21691 main.go:141] libmachine: (ha-315064-m02) Ensuring network mk-ha-315064 is active
	I0318 20:47:19.663352   21691 main.go:141] libmachine: (ha-315064-m02) Getting domain xml...
	I0318 20:47:19.664011   21691 main.go:141] libmachine: (ha-315064-m02) Creating domain...
	I0318 20:47:20.891077   21691 main.go:141] libmachine: (ha-315064-m02) Waiting to get IP...
	I0318 20:47:20.892035   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:20.892420   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:20.892446   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:20.892401   22037 retry.go:31] will retry after 307.508626ms: waiting for machine to come up
	I0318 20:47:21.202013   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:21.202516   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:21.202546   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:21.202476   22037 retry.go:31] will retry after 367.474223ms: waiting for machine to come up
	I0318 20:47:21.571970   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:21.572427   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:21.572455   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:21.572380   22037 retry.go:31] will retry after 408.132027ms: waiting for machine to come up
	I0318 20:47:21.982468   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:21.983022   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:21.983053   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:21.982974   22037 retry.go:31] will retry after 501.335195ms: waiting for machine to come up
	I0318 20:47:22.485585   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:22.486050   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:22.486094   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:22.486020   22037 retry.go:31] will retry after 734.489713ms: waiting for machine to come up
	I0318 20:47:23.221785   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:23.222239   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:23.222266   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:23.222205   22037 retry.go:31] will retry after 853.9073ms: waiting for machine to come up
	I0318 20:47:24.077586   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:24.078058   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:24.078091   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:24.078010   22037 retry.go:31] will retry after 1.158273772s: waiting for machine to come up
	I0318 20:47:25.237375   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:25.237816   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:25.237840   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:25.237789   22037 retry.go:31] will retry after 1.20695979s: waiting for machine to come up
	I0318 20:47:26.446084   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:26.446524   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:26.446552   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:26.446488   22037 retry.go:31] will retry after 1.582418917s: waiting for machine to come up
	I0318 20:47:28.029813   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:28.030202   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:28.030232   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:28.030156   22037 retry.go:31] will retry after 1.8376141s: waiting for machine to come up
	I0318 20:47:29.869029   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:29.869479   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:29.869502   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:29.869440   22037 retry.go:31] will retry after 2.868778614s: waiting for machine to come up
	I0318 20:47:32.739287   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:32.739682   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:32.739703   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:32.739652   22037 retry.go:31] will retry after 2.654134326s: waiting for machine to come up
	I0318 20:47:35.395406   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:35.395790   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:35.395811   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:35.395760   22037 retry.go:31] will retry after 3.820856712s: waiting for machine to come up
	I0318 20:47:39.217916   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:39.218310   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find current IP address of domain ha-315064-m02 in network mk-ha-315064
	I0318 20:47:39.218347   21691 main.go:141] libmachine: (ha-315064-m02) DBG | I0318 20:47:39.218279   22037 retry.go:31] will retry after 5.323823478s: waiting for machine to come up
	I0318 20:47:44.543655   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.544031   21691 main.go:141] libmachine: (ha-315064-m02) Found IP for machine: 192.168.39.231
	I0318 20:47:44.544050   21691 main.go:141] libmachine: (ha-315064-m02) Reserving static IP address...
	I0318 20:47:44.544063   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has current primary IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.544406   21691 main.go:141] libmachine: (ha-315064-m02) DBG | unable to find host DHCP lease matching {name: "ha-315064-m02", mac: "52:54:00:83:47:db", ip: "192.168.39.231"} in network mk-ha-315064
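Annotation: the repeated "will retry after ...: waiting for machine to come up" lines above are a poll loop whose sleep grows after each failed DHCP-lease lookup until the VM reports 192.168.39.231. A minimal, self-contained sketch of that pattern (an illustration with assumed names, not minikube's retry.go) could be:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes,
    // sleeping a little longer after each failed attempt, as in the log above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		ip, err := lookup()
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow the backoff between attempts
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	// Hypothetical lookup; the real driver asks libvirt for the DHCP lease.
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.39.231", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }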
	I0318 20:47:44.613338   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Getting to WaitForSSH function...
	I0318 20:47:44.613373   21691 main.go:141] libmachine: (ha-315064-m02) Reserved static IP address: 192.168.39.231
	I0318 20:47:44.613385   21691 main.go:141] libmachine: (ha-315064-m02) Waiting for SSH to be available...
	I0318 20:47:44.615919   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.616386   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:83:47:db}
	I0318 20:47:44.616415   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.616527   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Using SSH client type: external
	I0318 20:47:44.616554   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa (-rw-------)
	I0318 20:47:44.616596   21691 main.go:141] libmachine: (ha-315064-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 20:47:44.616614   21691 main.go:141] libmachine: (ha-315064-m02) DBG | About to run SSH command:
	I0318 20:47:44.616628   21691 main.go:141] libmachine: (ha-315064-m02) DBG | exit 0
	I0318 20:47:44.745056   21691 main.go:141] libmachine: (ha-315064-m02) DBG | SSH cmd err, output: <nil>: 
	I0318 20:47:44.745296   21691 main.go:141] libmachine: (ha-315064-m02) KVM machine creation complete!
	I0318 20:47:44.745623   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetConfigRaw
	I0318 20:47:44.746158   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:44.746333   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:44.746518   21691 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 20:47:44.746533   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 20:47:44.747653   21691 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 20:47:44.747665   21691 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 20:47:44.747671   21691 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 20:47:44.747679   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:44.749757   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.750127   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:44.750159   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.750256   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:44.750423   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.750581   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.750739   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:44.750903   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:44.751102   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:44.751116   21691 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 20:47:44.860877   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:47:44.860926   21691 main.go:141] libmachine: Detecting the provisioner...
	I0318 20:47:44.860936   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:44.863953   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.864310   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:44.864335   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.864523   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:44.864723   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.864866   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.865002   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:44.865150   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:44.865318   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:44.865331   21691 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 20:47:44.977946   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 20:47:44.978024   21691 main.go:141] libmachine: found compatible host: buildroot
	I0318 20:47:44.978031   21691 main.go:141] libmachine: Provisioning with buildroot...
	I0318 20:47:44.978040   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetMachineName
	I0318 20:47:44.978291   21691 buildroot.go:166] provisioning hostname "ha-315064-m02"
	I0318 20:47:44.978319   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetMachineName
	I0318 20:47:44.978522   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:44.981043   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.981416   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:44.981444   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:44.981595   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:44.981743   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.981913   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:44.982030   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:44.982172   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:44.982319   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:44.982331   21691 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-315064-m02 && echo "ha-315064-m02" | sudo tee /etc/hostname
	I0318 20:47:45.108741   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064-m02
	
	I0318 20:47:45.108763   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:45.111288   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.111666   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.111697   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.111855   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:45.112063   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.112249   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.112398   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:45.112558   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:45.112720   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:45.112737   21691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-315064-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-315064-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-315064-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:47:45.230251   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
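Annotation: the provisioning steps above run short shell snippets on the new VM over SSH as user "docker" with the generated id_rsa key (hostname, /etc/hostname, /etc/hosts). As a standalone illustration of that step, assuming golang.org/x/crypto/ssh rather than libmachine's own client, one such remote command could be run like this:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Load the per-machine private key created earlier in the log.
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}

    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
    	}

    	client, err := ssh.Dial("tcp", "192.168.39.231:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput(`sudo hostname ha-315064-m02 && echo "ha-315064-m02" | sudo tee /etc/hostname`)
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }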
	I0318 20:47:45.230275   21691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:47:45.230289   21691 buildroot.go:174] setting up certificates
	I0318 20:47:45.230299   21691 provision.go:84] configureAuth start
	I0318 20:47:45.230309   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetMachineName
	I0318 20:47:45.230547   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:47:45.233273   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.233648   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.233683   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.233859   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:45.235996   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.236306   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.236329   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.236459   21691 provision.go:143] copyHostCerts
	I0318 20:47:45.236484   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:47:45.236510   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 20:47:45.236518   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:47:45.236583   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:47:45.236677   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:47:45.236702   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 20:47:45.236711   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:47:45.236738   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:47:45.236801   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:47:45.236817   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 20:47:45.236823   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:47:45.236847   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:47:45.236918   21691 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.ha-315064-m02 san=[127.0.0.1 192.168.39.231 ha-315064-m02 localhost minikube]
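Annotation: the line above issues a server certificate from the local CA with SANs 127.0.0.1, 192.168.39.231, ha-315064-m02, localhost and minikube, valid for the configured CertExpiration of 26280h. A minimal sketch of issuing such a certificate with Go's standard crypto/x509 (an illustration, not minikube's cert helper; loading the CA material is omitted) might be:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"log"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert signs a server certificate for the SANs logged above.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-315064-m02"}},
    		DNSNames:     []string{"ha-315064-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.231")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }

    func main() {
    	// Sketch only: load ca.pem / ca-key.pem before calling issueServerCert.
    	_ = issueServerCert
    	log.Println("sketch only: wire up the CA material before issuing the cert")
    }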
	I0318 20:47:45.546247   21691 provision.go:177] copyRemoteCerts
	I0318 20:47:45.546410   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:47:45.546470   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:45.549477   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.549818   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.549849   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.550188   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:45.550376   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.550562   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:45.550718   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:47:45.638487   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 20:47:45.638568   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:47:45.666316   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 20:47:45.666385   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 20:47:45.692354   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 20:47:45.692430   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 20:47:45.717316   21691 provision.go:87] duration metric: took 487.007623ms to configureAuth
	I0318 20:47:45.717336   21691 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:47:45.717496   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:47:45.717563   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:45.720132   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.720503   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:45.720533   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:45.720732   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:45.720947   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.721128   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:45.721279   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:45.721420   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:45.721617   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:45.721632   21691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:47:46.004191   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 20:47:46.004231   21691 main.go:141] libmachine: Checking connection to Docker...
	I0318 20:47:46.004243   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetURL
	I0318 20:47:46.005539   21691 main.go:141] libmachine: (ha-315064-m02) DBG | Using libvirt version 6000000
	I0318 20:47:46.007767   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.008106   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.008135   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.008303   21691 main.go:141] libmachine: Docker is up and running!
	I0318 20:47:46.008322   21691 main.go:141] libmachine: Reticulating splines...
	I0318 20:47:46.008328   21691 client.go:171] duration metric: took 26.89641561s to LocalClient.Create
	I0318 20:47:46.008349   21691 start.go:167] duration metric: took 26.896473285s to libmachine.API.Create "ha-315064"
	I0318 20:47:46.008363   21691 start.go:293] postStartSetup for "ha-315064-m02" (driver="kvm2")
	I0318 20:47:46.008375   21691 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:47:46.008398   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.008623   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:47:46.008648   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:46.010796   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.011124   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.011159   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.011253   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:46.011449   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.011607   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:46.011743   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:47:46.097024   21691 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:47:46.101794   21691 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:47:46.101813   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:47:46.101878   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:47:46.101968   21691 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 20:47:46.101979   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 20:47:46.102081   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 20:47:46.112735   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:47:46.138673   21691 start.go:296] duration metric: took 130.296968ms for postStartSetup
	I0318 20:47:46.138723   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetConfigRaw
	I0318 20:47:46.139238   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:47:46.141699   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.142076   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.142114   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.142341   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:47:46.142548   21691 start.go:128] duration metric: took 27.049500671s to createHost
	I0318 20:47:46.142569   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:46.144585   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.144949   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.144972   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.145108   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:46.145297   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.145460   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.145590   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:46.145732   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:47:46.145930   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0318 20:47:46.145941   21691 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:47:46.253936   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710794866.241426499
	
	I0318 20:47:46.253957   21691 fix.go:216] guest clock: 1710794866.241426499
	I0318 20:47:46.253964   21691 fix.go:229] Guest: 2024-03-18 20:47:46.241426499 +0000 UTC Remote: 2024-03-18 20:47:46.142559775 +0000 UTC m=+84.301835232 (delta=98.866724ms)
	I0318 20:47:46.253987   21691 fix.go:200] guest clock delta is within tolerance: 98.866724ms
	I0318 20:47:46.253997   21691 start.go:83] releasing machines lock for "ha-315064-m02", held for 27.161027842s
	I0318 20:47:46.254020   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.254252   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:47:46.256496   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.256789   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.256824   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.259159   21691 out.go:177] * Found network options:
	I0318 20:47:46.260551   21691 out.go:177]   - NO_PROXY=192.168.39.79
	W0318 20:47:46.262123   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 20:47:46.262157   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.262596   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.262749   21691 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 20:47:46.262817   21691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:47:46.262853   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	W0318 20:47:46.262936   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 20:47:46.263005   21691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:47:46.263026   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 20:47:46.265347   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.265540   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.265744   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.265768   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.265951   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:46.266091   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:46.266124   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.266123   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:46.266243   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 20:47:46.266302   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:46.266370   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 20:47:46.266421   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:47:46.266493   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 20:47:46.266580   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 20:47:46.507378   21691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:47:46.514755   21691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:47:46.514815   21691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:47:46.533072   21691 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 20:47:46.533092   21691 start.go:494] detecting cgroup driver to use...
	I0318 20:47:46.533166   21691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:47:46.550301   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:47:46.564908   21691 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:47:46.564957   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:47:46.579343   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:47:46.594924   21691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:47:46.715361   21691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:47:46.902204   21691 docker.go:233] disabling docker service ...
	I0318 20:47:46.902349   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:47:46.917107   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:47:46.930537   21691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:47:47.042445   21691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:47:47.159759   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:47:47.175718   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:47:47.199673   21691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:47:47.199736   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.211567   21691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:47:47.211625   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.223499   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.234944   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.246410   21691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:47:47.258606   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.270498   21691 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.289264   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:47:47.300736   21691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:47:47.311171   21691 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 20:47:47.311209   21691 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 20:47:47.326024   21691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 20:47:47.336401   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:47:47.473807   21691 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 20:47:47.649394   21691 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:47:47.649464   21691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:47:47.655364   21691 start.go:562] Will wait 60s for crictl version
	I0318 20:47:47.655423   21691 ssh_runner.go:195] Run: which crictl
	I0318 20:47:47.659847   21691 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:47:47.700625   21691 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 20:47:47.700697   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:47:47.733291   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:47:47.768461   21691 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:47:47.769812   21691 out.go:177]   - env NO_PROXY=192.168.39.79
	I0318 20:47:47.771081   21691 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 20:47:47.773323   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:47.773708   21691 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:47:35 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 20:47:47.773742   21691 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 20:47:47.773847   21691 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:47:47.778823   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:47:47.793651   21691 mustload.go:65] Loading cluster: ha-315064
	I0318 20:47:47.793850   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:47:47.794090   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:47.794122   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:47.809130   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I0318 20:47:47.809566   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:47.810031   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:47.810057   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:47.810341   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:47.810518   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:47:47.811833   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:47:47.812102   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:47:47.812124   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:47:47.826297   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0318 20:47:47.826621   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:47:47.827036   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:47:47.827058   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:47:47.827362   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:47:47.827530   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:47:47.827691   21691 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064 for IP: 192.168.39.231
	I0318 20:47:47.827704   21691 certs.go:194] generating shared ca certs ...
	I0318 20:47:47.827721   21691 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:47.827844   21691 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:47:47.827900   21691 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:47:47.827912   21691 certs.go:256] generating profile certs ...
	I0318 20:47:47.827991   21691 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key
	I0318 20:47:47.828015   21691 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.2e13e493
	I0318 20:47:47.828028   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.2e13e493 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.231 192.168.39.254]
	I0318 20:47:48.093452   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.2e13e493 ...
	I0318 20:47:48.093479   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.2e13e493: {Name:mkd20d01fcb744945a4bb06b57a33915b0e35c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:48.093631   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.2e13e493 ...
	I0318 20:47:48.093644   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.2e13e493: {Name:mkd245430ef1aa369b0c6240cb5397c4595ada4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:47:48.093717   21691 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.2e13e493 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt
	I0318 20:47:48.093833   21691 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.2e13e493 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key
	I0318 20:47:48.093956   21691 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key
	I0318 20:47:48.093971   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 20:47:48.093982   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 20:47:48.093992   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 20:47:48.094005   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 20:47:48.094015   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 20:47:48.094027   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 20:47:48.094036   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 20:47:48.094053   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 20:47:48.094096   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 20:47:48.094127   21691 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 20:47:48.094135   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:47:48.094159   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:47:48.094179   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:47:48.094202   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:47:48.094239   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:47:48.094265   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:47:48.094277   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 20:47:48.094289   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 20:47:48.094319   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:47:48.097009   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:48.097391   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:47:48.097424   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:47:48.097554   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:47:48.097748   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:47:48.097881   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:47:48.098030   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:47:48.173144   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 20:47:48.180359   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 20:47:48.192761   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 20:47:48.198040   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0318 20:47:48.209648   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 20:47:48.214575   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 20:47:48.225444   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 20:47:48.230246   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0318 20:47:48.240368   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 20:47:48.246684   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 20:47:48.257856   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 20:47:48.262290   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0318 20:47:48.272552   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:47:48.301497   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:47:48.330774   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:47:48.356404   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:47:48.382230   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 20:47:48.411002   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 20:47:48.437284   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:47:48.464576   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 20:47:48.490586   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:47:48.515964   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 20:47:48.542875   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 20:47:48.569140   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 20:47:48.586895   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0318 20:47:48.605599   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 20:47:48.623898   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0318 20:47:48.641536   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 20:47:48.659192   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0318 20:47:48.678693   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 20:47:48.696310   21691 ssh_runner.go:195] Run: openssl version
	I0318 20:47:48.702361   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 20:47:48.713710   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 20:47:48.718609   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 20:47:48.718649   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 20:47:48.725676   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 20:47:48.736977   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 20:47:48.748349   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 20:47:48.753034   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 20:47:48.753072   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 20:47:48.759043   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 20:47:48.770340   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:47:48.781640   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:47:48.786579   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:47:48.786635   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:47:48.793043   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 20:47:48.804637   21691 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:47:48.809529   21691 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 20:47:48.809582   21691 kubeadm.go:928] updating node {m02 192.168.39.231 8443 v1.28.4 crio true true} ...
	I0318 20:47:48.809676   21691 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-315064-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:47:48.809710   21691 kube-vip.go:111] generating kube-vip config ...
	I0318 20:47:48.809746   21691 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 20:47:48.829499   21691 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 20:47:48.829572   21691 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 20:47:48.829631   21691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:47:48.840855   21691 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 20:47:48.840895   21691 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 20:47:48.851675   21691 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 20:47:48.851708   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 20:47:48.851764   21691 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0318 20:47:48.851802   21691 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0318 20:47:48.851777   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 20:47:48.858044   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 20:47:48.858081   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 20:48:26.937437   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 20:48:26.937537   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 20:48:26.944042   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 20:48:26.944078   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 20:49:09.657049   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:49:09.677449   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 20:49:09.677555   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 20:49:09.682608   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 20:49:09.682638   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 20:49:10.166149   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 20:49:10.177125   21691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 20:49:10.195166   21691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:49:10.212663   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 20:49:10.230083   21691 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 20:49:10.234495   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:49:10.248051   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:49:10.370218   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:49:10.387816   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:49:10.388256   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:49:10.388310   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:49:10.402882   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45745
	I0318 20:49:10.403348   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:49:10.403877   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:49:10.403899   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:49:10.404229   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:49:10.404433   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:49:10.404613   21691 start.go:316] joinCluster: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:49:10.404706   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 20:49:10.404722   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:49:10.407626   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:49:10.408109   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:49:10.408138   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:49:10.408289   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:49:10.408475   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:49:10.408653   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:49:10.408803   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:49:10.581970   21691 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:49:10.582014   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dx7kgq.irksjynle7vx4zyx --discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-315064-m02 --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443"
	I0318 20:49:51.873194   21691 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dx7kgq.irksjynle7vx4zyx --discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-315064-m02 --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443": (41.29114975s)
	I0318 20:49:51.873233   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 20:49:52.229164   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-315064-m02 minikube.k8s.io/updated_at=2024_03_18T20_49_52_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=ha-315064 minikube.k8s.io/primary=false
	I0318 20:49:52.365421   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-315064-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 20:49:52.491390   21691 start.go:318] duration metric: took 42.086770613s to joinCluster
	I0318 20:49:52.491470   21691 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:49:52.493205   21691 out.go:177] * Verifying Kubernetes components...
	I0318 20:49:52.491752   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:49:52.494673   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:49:52.691301   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:49:52.729729   21691 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:49:52.730098   21691 kapi.go:59] client config for ha-315064: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt", KeyFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key", CAFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 20:49:52.730181   21691 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.79:8443
	I0318 20:49:52.730452   21691 node_ready.go:35] waiting up to 6m0s for node "ha-315064-m02" to be "Ready" ...
	I0318 20:49:52.730554   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:52.730603   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:52.730618   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:52.730624   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:52.746109   21691 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 20:49:53.231159   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:53.231181   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:53.231189   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:53.231193   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:53.235137   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:53.731698   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:53.731719   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:53.731727   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:53.731732   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:53.735689   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:54.231340   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:54.231359   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:54.231367   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:54.231370   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:54.235405   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:49:54.731346   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:54.731371   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:54.731382   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:54.731388   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:54.734547   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:54.735056   21691 node_ready.go:53] node "ha-315064-m02" has status "Ready":"False"
	I0318 20:49:55.230973   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:55.230994   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:55.231004   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:55.231010   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:55.235442   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:49:55.730621   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:55.730640   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:55.730648   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:55.730651   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:55.734572   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:56.230767   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:56.230793   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:56.230802   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:56.230807   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:56.235102   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:49:56.730967   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:56.730988   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:56.731002   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:56.731007   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:56.734890   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:56.735745   21691 node_ready.go:53] node "ha-315064-m02" has status "Ready":"False"
	I0318 20:49:57.230817   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:57.230837   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:57.230848   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:57.230854   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:57.234764   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:57.731135   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:57.731160   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:57.731173   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:57.731181   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:57.736522   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:49:58.230674   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:58.230705   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:58.230717   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:58.230725   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:58.234715   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:58.731046   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:58.731066   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:58.731073   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:58.731077   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:58.735536   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:49:58.736325   21691 node_ready.go:53] node "ha-315064-m02" has status "Ready":"False"
	I0318 20:49:59.230963   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:59.230992   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:59.231000   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:59.231004   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:59.235006   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:49:59.731345   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:49:59.731366   21691 round_trippers.go:469] Request Headers:
	I0318 20:49:59.731373   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:49:59.731377   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:49:59.735085   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:00.231134   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:00.231152   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:00.231159   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:00.231161   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:00.236575   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:00.731654   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:00.731677   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:00.731688   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:00.731695   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:00.735275   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:01.231376   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:01.231402   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.231412   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.231418   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.235594   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:01.236108   21691 node_ready.go:53] node "ha-315064-m02" has status "Ready":"False"
	I0318 20:50:01.731200   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:01.731227   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.731237   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.731242   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.744476   21691 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0318 20:50:01.745058   21691 node_ready.go:49] node "ha-315064-m02" has status "Ready":"True"
	I0318 20:50:01.745086   21691 node_ready.go:38] duration metric: took 9.014593914s for node "ha-315064-m02" to be "Ready" ...
	I0318 20:50:01.745099   21691 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 20:50:01.745200   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:01.745211   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.745221   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.745232   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.750396   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:01.757942   21691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.758011   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgqzg
	I0318 20:50:01.758016   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.758024   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.758027   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.763286   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:01.764063   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:01.764081   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.764090   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.764097   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.766807   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.767298   21691 pod_ready.go:92] pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:01.767314   21691 pod_ready.go:81] duration metric: took 9.349024ms for pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.767324   21691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.767365   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hrrzn
	I0318 20:50:01.767373   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.767379   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.767383   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.770042   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.770568   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:01.770581   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.770587   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.770591   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.772890   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.773445   21691 pod_ready.go:92] pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:01.773463   21691 pod_ready.go:81] duration metric: took 6.1332ms for pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.773471   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.773515   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064
	I0318 20:50:01.773523   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.773530   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.773533   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.775945   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.776625   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:01.776638   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.776645   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.776647   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.778941   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.779611   21691 pod_ready.go:92] pod "etcd-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:01.779628   21691 pod_ready.go:81] duration metric: took 6.149827ms for pod "etcd-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.779638   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.779692   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m02
	I0318 20:50:01.779702   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.779711   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.779720   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.782365   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.783043   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:01.783058   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.783065   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.783071   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.785832   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:01.786416   21691 pod_ready.go:92] pod "etcd-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:01.786441   21691 pod_ready.go:81] duration metric: took 6.793477ms for pod "etcd-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.786458   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:01.931713   21691 request.go:629] Waited for 145.197061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064
	I0318 20:50:01.931779   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064
	I0318 20:50:01.931786   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:01.931793   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:01.931799   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:01.935672   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.132038   21691 request.go:629] Waited for 195.406119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:02.132095   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:02.132102   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.132109   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.132113   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.135819   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.136326   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:02.136347   21691 pod_ready.go:81] duration metric: took 349.8771ms for pod "kube-apiserver-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.136359   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.331412   21691 request.go:629] Waited for 194.985949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m02
	I0318 20:50:02.331478   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m02
	I0318 20:50:02.331484   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.331497   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.331504   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.335255   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.531392   21691 request.go:629] Waited for 195.271299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:02.531462   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:02.531467   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.531474   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.531481   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.535658   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:02.536145   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:02.536162   21691 pod_ready.go:81] duration metric: took 399.795443ms for pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.536172   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.731190   21691 request.go:629] Waited for 194.958242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064
	I0318 20:50:02.731271   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064
	I0318 20:50:02.731281   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.731289   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.731293   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.735169   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.931723   21691 request.go:629] Waited for 195.72943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:02.931779   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:02.931784   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:02.931791   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:02.931794   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:02.935677   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:02.936292   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:02.936310   21691 pod_ready.go:81] duration metric: took 400.128828ms for pod "kube-controller-manager-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:02.936322   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.131283   21691 request.go:629] Waited for 194.898302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m02
	I0318 20:50:03.131363   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m02
	I0318 20:50:03.131380   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.131388   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.131391   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.134918   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:03.332221   21691 request.go:629] Waited for 196.378538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:03.332304   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:03.332316   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.332327   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.332338   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.336513   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:03.337083   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:03.337106   21691 pod_ready.go:81] duration metric: took 400.77159ms for pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.337128   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bccjj" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.532242   21691 request.go:629] Waited for 195.052953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bccjj
	I0318 20:50:03.532330   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bccjj
	I0318 20:50:03.532345   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.532359   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.532369   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.538777   21691 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 20:50:03.731734   21691 request.go:629] Waited for 192.381403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:03.731799   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:03.731806   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.731817   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.731823   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.737638   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:03.738285   21691 pod_ready.go:92] pod "kube-proxy-bccjj" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:03.738309   21691 pod_ready.go:81] duration metric: took 401.167668ms for pod "kube-proxy-bccjj" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.738325   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrm24" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:03.931424   21691 request.go:629] Waited for 193.02369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrm24
	I0318 20:50:03.931472   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrm24
	I0318 20:50:03.931478   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:03.931486   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:03.931498   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:03.936430   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.132060   21691 request.go:629] Waited for 194.396507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:04.132115   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:04.132120   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.132127   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.132132   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.136617   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.137250   21691 pod_ready.go:92] pod "kube-proxy-wrm24" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:04.137268   21691 pod_ready.go:81] duration metric: took 398.935303ms for pod "kube-proxy-wrm24" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.137277   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.331725   21691 request.go:629] Waited for 194.337813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064
	I0318 20:50:04.331781   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064
	I0318 20:50:04.331789   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.331797   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.331801   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.336450   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.531594   21691 request.go:629] Waited for 193.365956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:04.531645   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:50:04.531651   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.531661   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.531667   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.535123   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:50:04.535892   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:04.535910   21691 pod_ready.go:81] duration metric: took 398.625255ms for pod "kube-scheduler-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.535919   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.732072   21691 request.go:629] Waited for 196.087759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m02
	I0318 20:50:04.732130   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m02
	I0318 20:50:04.732135   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.732143   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.732148   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.736272   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.931908   21691 request.go:629] Waited for 194.34409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:04.931961   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:50:04.931966   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.931973   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.931986   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.936740   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:04.937266   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:50:04.937283   21691 pod_ready.go:81] duration metric: took 401.357763ms for pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:50:04.937297   21691 pod_ready.go:38] duration metric: took 3.192182419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
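
The repeated GET-pod / GET-node pairs above are a readiness poll: fetch the pod, check its Ready condition, and wait before the next attempt. The following is only an illustrative client-go sketch of that check, not minikube's own pod_ready.go; the kubeconfig path is a placeholder and the pod name is taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod reports Ready, similar to the waits logged above.
	for attempt := 0; attempt < 90; attempt++ {
		ready, err := isPodReady(cs, "kube-system", "etcd-ha-315064")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
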
	I0318 20:50:04.937318   21691 api_server.go:52] waiting for apiserver process to appear ...
	I0318 20:50:04.937380   21691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:50:04.956032   21691 api_server.go:72] duration metric: took 12.464523612s to wait for apiserver process to appear ...
	I0318 20:50:04.956072   21691 api_server.go:88] waiting for apiserver healthz status ...
	I0318 20:50:04.956096   21691 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I0318 20:50:04.964543   21691 api_server.go:279] https://192.168.39.79:8443/healthz returned 200:
	ok
	I0318 20:50:04.964610   21691 round_trippers.go:463] GET https://192.168.39.79:8443/version
	I0318 20:50:04.964622   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:04.964630   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:04.964636   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:04.967011   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:50:04.967419   21691 api_server.go:141] control plane version: v1.28.4
	I0318 20:50:04.967438   21691 api_server.go:131] duration metric: took 11.358845ms to wait for apiserver health ...
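
The healthz probe logged above is a plain HTTPS GET against port 8443 that is treated as healthy when it returns 200 with the body "ok". A minimal, illustrative Go equivalent follows; certificate verification is skipped here purely for brevity (a real client would trust the cluster CA), and the address comes from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skip TLS verification only in this sketch; production code should verify the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.79:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log above shows the expected result: status 200 with body "ok".
	fmt.Println(resp.StatusCode, string(body))
}
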
	I0318 20:50:04.967447   21691 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 20:50:05.131838   21691 request.go:629] Waited for 164.328956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:05.131891   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:05.131906   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:05.131934   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:05.131945   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:05.137393   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:50:05.142231   21691 system_pods.go:59] 17 kube-system pods found
	I0318 20:50:05.142256   21691 system_pods.go:61] "coredns-5dd5756b68-fgqzg" [245a67a5-7e01-445d-a741-900dd301c127] Running
	I0318 20:50:05.142261   21691 system_pods.go:61] "coredns-5dd5756b68-hrrzn" [bd22f324-f86b-458f-8443-1fbb4c47521e] Running
	I0318 20:50:05.142265   21691 system_pods.go:61] "etcd-ha-315064" [9cda89d4-982e-4b59-9d41-5318d9927e10] Running
	I0318 20:50:05.142268   21691 system_pods.go:61] "etcd-ha-315064-m02" [330ca3db-e1ba-4ce7-9b37-c3d791f7a3ad] Running
	I0318 20:50:05.142271   21691 system_pods.go:61] "kindnet-dvtw7" [88b28235-5259-453e-af33-f2ab8e7e6609] Running
	I0318 20:50:05.142274   21691 system_pods.go:61] "kindnet-tbghx" [9c5ae7df-5e40-42ca-b8e6-d7bbc335e065] Running
	I0318 20:50:05.142277   21691 system_pods.go:61] "kube-apiserver-ha-315064" [efa72228-3815-4456-89ee-603b73e97ab9] Running
	I0318 20:50:05.142282   21691 system_pods.go:61] "kube-apiserver-ha-315064-m02" [2a466fac-9e4b-4887-8ad3-3f01d594b615] Running
	I0318 20:50:05.142287   21691 system_pods.go:61] "kube-controller-manager-ha-315064" [2630ed62-b0c8-4cee-899a-9f7d14eabefb] Running
	I0318 20:50:05.142294   21691 system_pods.go:61] "kube-controller-manager-ha-315064-m02" [ba8783c4-bba1-41ee-97d2-62186bd2f96e] Running
	I0318 20:50:05.142304   21691 system_pods.go:61] "kube-proxy-bccjj" [f0f1ef98-75cf-47cd-a99b-ba443d7df38a] Running
	I0318 20:50:05.142309   21691 system_pods.go:61] "kube-proxy-wrm24" [b686bb37-4624-4b09-b335-d292a914e41c] Running
	I0318 20:50:05.142321   21691 system_pods.go:61] "kube-scheduler-ha-315064" [2d7ccbd2-5151-466c-83b1-39bdd17813d1] Running
	I0318 20:50:05.142326   21691 system_pods.go:61] "kube-scheduler-ha-315064-m02" [2a91d68a-c56f-43c9-985b-c0a2d72d56a8] Running
	I0318 20:50:05.142330   21691 system_pods.go:61] "kube-vip-ha-315064" [af9ee260-66a6-435a-957c-40b598d3d9ec] Running
	I0318 20:50:05.142334   21691 system_pods.go:61] "kube-vip-ha-315064-m02" [45c22149-503d-49ed-8b45-63f95a8c402b] Running
	I0318 20:50:05.142337   21691 system_pods.go:61] "storage-provisioner" [4ddebef9-cc69-4535-8dc5-9117878507d8] Running
	I0318 20:50:05.142346   21691 system_pods.go:74] duration metric: took 174.892878ms to wait for pod list to return data ...
	I0318 20:50:05.142356   21691 default_sa.go:34] waiting for default service account to be created ...
	I0318 20:50:05.331790   21691 request.go:629] Waited for 189.358753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/default/serviceaccounts
	I0318 20:50:05.331878   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/default/serviceaccounts
	I0318 20:50:05.331885   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:05.331892   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:05.331895   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:05.335930   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:05.336138   21691 default_sa.go:45] found service account: "default"
	I0318 20:50:05.336154   21691 default_sa.go:55] duration metric: took 193.78625ms for default service account to be created ...
	I0318 20:50:05.336164   21691 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 20:50:05.531741   21691 request.go:629] Waited for 195.502632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:05.531801   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:50:05.531807   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:05.531815   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:05.531821   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:05.538222   21691 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 20:50:05.543102   21691 system_pods.go:86] 17 kube-system pods found
	I0318 20:50:05.543123   21691 system_pods.go:89] "coredns-5dd5756b68-fgqzg" [245a67a5-7e01-445d-a741-900dd301c127] Running
	I0318 20:50:05.543129   21691 system_pods.go:89] "coredns-5dd5756b68-hrrzn" [bd22f324-f86b-458f-8443-1fbb4c47521e] Running
	I0318 20:50:05.543133   21691 system_pods.go:89] "etcd-ha-315064" [9cda89d4-982e-4b59-9d41-5318d9927e10] Running
	I0318 20:50:05.543136   21691 system_pods.go:89] "etcd-ha-315064-m02" [330ca3db-e1ba-4ce7-9b37-c3d791f7a3ad] Running
	I0318 20:50:05.543140   21691 system_pods.go:89] "kindnet-dvtw7" [88b28235-5259-453e-af33-f2ab8e7e6609] Running
	I0318 20:50:05.543145   21691 system_pods.go:89] "kindnet-tbghx" [9c5ae7df-5e40-42ca-b8e6-d7bbc335e065] Running
	I0318 20:50:05.543152   21691 system_pods.go:89] "kube-apiserver-ha-315064" [efa72228-3815-4456-89ee-603b73e97ab9] Running
	I0318 20:50:05.543158   21691 system_pods.go:89] "kube-apiserver-ha-315064-m02" [2a466fac-9e4b-4887-8ad3-3f01d594b615] Running
	I0318 20:50:05.543168   21691 system_pods.go:89] "kube-controller-manager-ha-315064" [2630ed62-b0c8-4cee-899a-9f7d14eabefb] Running
	I0318 20:50:05.543175   21691 system_pods.go:89] "kube-controller-manager-ha-315064-m02" [ba8783c4-bba1-41ee-97d2-62186bd2f96e] Running
	I0318 20:50:05.543186   21691 system_pods.go:89] "kube-proxy-bccjj" [f0f1ef98-75cf-47cd-a99b-ba443d7df38a] Running
	I0318 20:50:05.543194   21691 system_pods.go:89] "kube-proxy-wrm24" [b686bb37-4624-4b09-b335-d292a914e41c] Running
	I0318 20:50:05.543202   21691 system_pods.go:89] "kube-scheduler-ha-315064" [2d7ccbd2-5151-466c-83b1-39bdd17813d1] Running
	I0318 20:50:05.543206   21691 system_pods.go:89] "kube-scheduler-ha-315064-m02" [2a91d68a-c56f-43c9-985b-c0a2d72d56a8] Running
	I0318 20:50:05.543210   21691 system_pods.go:89] "kube-vip-ha-315064" [af9ee260-66a6-435a-957c-40b598d3d9ec] Running
	I0318 20:50:05.543214   21691 system_pods.go:89] "kube-vip-ha-315064-m02" [45c22149-503d-49ed-8b45-63f95a8c402b] Running
	I0318 20:50:05.543217   21691 system_pods.go:89] "storage-provisioner" [4ddebef9-cc69-4535-8dc5-9117878507d8] Running
	I0318 20:50:05.543224   21691 system_pods.go:126] duration metric: took 207.051256ms to wait for k8s-apps to be running ...
	I0318 20:50:05.543232   21691 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 20:50:05.543284   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:50:05.560465   21691 system_svc.go:56] duration metric: took 17.227626ms WaitForService to wait for kubelet
	I0318 20:50:05.560487   21691 kubeadm.go:576] duration metric: took 13.06898296s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:50:05.560507   21691 node_conditions.go:102] verifying NodePressure condition ...
	I0318 20:50:05.731890   21691 request.go:629] Waited for 171.304468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes
	I0318 20:50:05.731951   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes
	I0318 20:50:05.731961   21691 round_trippers.go:469] Request Headers:
	I0318 20:50:05.731972   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:50:05.731980   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:50:05.736397   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:50:05.737322   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:50:05.737345   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:50:05.737358   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:50:05.737364   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:50:05.737369   21691 node_conditions.go:105] duration metric: took 176.857341ms to run NodePressure ...
	I0318 20:50:05.737386   21691 start.go:240] waiting for startup goroutines ...
	I0318 20:50:05.737415   21691 start.go:254] writing updated cluster config ...
	I0318 20:50:05.739691   21691 out.go:177] 
	I0318 20:50:05.741235   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:50:05.741336   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:50:05.743192   21691 out.go:177] * Starting "ha-315064-m03" control-plane node in "ha-315064" cluster
	I0318 20:50:05.744626   21691 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:50:05.744644   21691 cache.go:56] Caching tarball of preloaded images
	I0318 20:50:05.744740   21691 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:50:05.744753   21691 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:50:05.744842   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:50:05.745066   21691 start.go:360] acquireMachinesLock for ha-315064-m03: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:50:05.745115   21691 start.go:364] duration metric: took 29.071µs to acquireMachinesLock for "ha-315064-m03"
	I0318 20:50:05.745138   21691 start.go:93] Provisioning new machine with config: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:50:05.745265   21691 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0318 20:50:05.746913   21691 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 20:50:05.747001   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:50:05.747031   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:50:05.761463   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0318 20:50:05.761896   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:50:05.762329   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:50:05.762347   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:50:05.762700   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:50:05.762898   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetMachineName
	I0318 20:50:05.763062   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:05.763220   21691 start.go:159] libmachine.API.Create for "ha-315064" (driver="kvm2")
	I0318 20:50:05.763248   21691 client.go:168] LocalClient.Create starting
	I0318 20:50:05.763280   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 20:50:05.763313   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:50:05.763332   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:50:05.763396   21691 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 20:50:05.763419   21691 main.go:141] libmachine: Decoding PEM data...
	I0318 20:50:05.763434   21691 main.go:141] libmachine: Parsing certificate...
	I0318 20:50:05.763463   21691 main.go:141] libmachine: Running pre-create checks...
	I0318 20:50:05.763475   21691 main.go:141] libmachine: (ha-315064-m03) Calling .PreCreateCheck
	I0318 20:50:05.763645   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetConfigRaw
	I0318 20:50:05.763974   21691 main.go:141] libmachine: Creating machine...
	I0318 20:50:05.763986   21691 main.go:141] libmachine: (ha-315064-m03) Calling .Create
	I0318 20:50:05.764104   21691 main.go:141] libmachine: (ha-315064-m03) Creating KVM machine...
	I0318 20:50:05.765309   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found existing default KVM network
	I0318 20:50:05.765418   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found existing private KVM network mk-ha-315064
	I0318 20:50:05.765530   21691 main.go:141] libmachine: (ha-315064-m03) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03 ...
	I0318 20:50:05.765564   21691 main.go:141] libmachine: (ha-315064-m03) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 20:50:05.765607   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:05.765522   22592 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:50:05.765705   21691 main.go:141] libmachine: (ha-315064-m03) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 20:50:05.983831   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:05.983708   22592 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa...
	I0318 20:50:06.362974   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:06.362874   22592 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/ha-315064-m03.rawdisk...
	I0318 20:50:06.363006   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Writing magic tar header
	I0318 20:50:06.363185   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Writing SSH key tar header
	I0318 20:50:06.363638   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:06.363578   22592 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03 ...
	I0318 20:50:06.363766   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03
	I0318 20:50:06.363791   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03 (perms=drwx------)
	I0318 20:50:06.363811   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 20:50:06.363825   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 20:50:06.363839   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:50:06.363852   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 20:50:06.363867   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 20:50:06.363884   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home/jenkins
	I0318 20:50:06.363899   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 20:50:06.363914   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 20:50:06.363923   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 20:50:06.363934   21691 main.go:141] libmachine: (ha-315064-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 20:50:06.363944   21691 main.go:141] libmachine: (ha-315064-m03) Creating domain...
	I0318 20:50:06.363967   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Checking permissions on dir: /home
	I0318 20:50:06.363979   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Skipping /home - not owner
	I0318 20:50:06.365029   21691 main.go:141] libmachine: (ha-315064-m03) define libvirt domain using xml: 
	I0318 20:50:06.365047   21691 main.go:141] libmachine: (ha-315064-m03) <domain type='kvm'>
	I0318 20:50:06.365054   21691 main.go:141] libmachine: (ha-315064-m03)   <name>ha-315064-m03</name>
	I0318 20:50:06.365059   21691 main.go:141] libmachine: (ha-315064-m03)   <memory unit='MiB'>2200</memory>
	I0318 20:50:06.365068   21691 main.go:141] libmachine: (ha-315064-m03)   <vcpu>2</vcpu>
	I0318 20:50:06.365079   21691 main.go:141] libmachine: (ha-315064-m03)   <features>
	I0318 20:50:06.365090   21691 main.go:141] libmachine: (ha-315064-m03)     <acpi/>
	I0318 20:50:06.365100   21691 main.go:141] libmachine: (ha-315064-m03)     <apic/>
	I0318 20:50:06.365115   21691 main.go:141] libmachine: (ha-315064-m03)     <pae/>
	I0318 20:50:06.365124   21691 main.go:141] libmachine: (ha-315064-m03)     
	I0318 20:50:06.365136   21691 main.go:141] libmachine: (ha-315064-m03)   </features>
	I0318 20:50:06.365146   21691 main.go:141] libmachine: (ha-315064-m03)   <cpu mode='host-passthrough'>
	I0318 20:50:06.365157   21691 main.go:141] libmachine: (ha-315064-m03)   
	I0318 20:50:06.365166   21691 main.go:141] libmachine: (ha-315064-m03)   </cpu>
	I0318 20:50:06.365176   21691 main.go:141] libmachine: (ha-315064-m03)   <os>
	I0318 20:50:06.365183   21691 main.go:141] libmachine: (ha-315064-m03)     <type>hvm</type>
	I0318 20:50:06.365195   21691 main.go:141] libmachine: (ha-315064-m03)     <boot dev='cdrom'/>
	I0318 20:50:06.365205   21691 main.go:141] libmachine: (ha-315064-m03)     <boot dev='hd'/>
	I0318 20:50:06.365214   21691 main.go:141] libmachine: (ha-315064-m03)     <bootmenu enable='no'/>
	I0318 20:50:06.365228   21691 main.go:141] libmachine: (ha-315064-m03)   </os>
	I0318 20:50:06.365254   21691 main.go:141] libmachine: (ha-315064-m03)   <devices>
	I0318 20:50:06.365277   21691 main.go:141] libmachine: (ha-315064-m03)     <disk type='file' device='cdrom'>
	I0318 20:50:06.365293   21691 main.go:141] libmachine: (ha-315064-m03)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/boot2docker.iso'/>
	I0318 20:50:06.365305   21691 main.go:141] libmachine: (ha-315064-m03)       <target dev='hdc' bus='scsi'/>
	I0318 20:50:06.365317   21691 main.go:141] libmachine: (ha-315064-m03)       <readonly/>
	I0318 20:50:06.365327   21691 main.go:141] libmachine: (ha-315064-m03)     </disk>
	I0318 20:50:06.365342   21691 main.go:141] libmachine: (ha-315064-m03)     <disk type='file' device='disk'>
	I0318 20:50:06.365364   21691 main.go:141] libmachine: (ha-315064-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 20:50:06.365384   21691 main.go:141] libmachine: (ha-315064-m03)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/ha-315064-m03.rawdisk'/>
	I0318 20:50:06.365395   21691 main.go:141] libmachine: (ha-315064-m03)       <target dev='hda' bus='virtio'/>
	I0318 20:50:06.365406   21691 main.go:141] libmachine: (ha-315064-m03)     </disk>
	I0318 20:50:06.365419   21691 main.go:141] libmachine: (ha-315064-m03)     <interface type='network'>
	I0318 20:50:06.365432   21691 main.go:141] libmachine: (ha-315064-m03)       <source network='mk-ha-315064'/>
	I0318 20:50:06.365444   21691 main.go:141] libmachine: (ha-315064-m03)       <model type='virtio'/>
	I0318 20:50:06.365453   21691 main.go:141] libmachine: (ha-315064-m03)     </interface>
	I0318 20:50:06.365465   21691 main.go:141] libmachine: (ha-315064-m03)     <interface type='network'>
	I0318 20:50:06.365474   21691 main.go:141] libmachine: (ha-315064-m03)       <source network='default'/>
	I0318 20:50:06.365486   21691 main.go:141] libmachine: (ha-315064-m03)       <model type='virtio'/>
	I0318 20:50:06.365495   21691 main.go:141] libmachine: (ha-315064-m03)     </interface>
	I0318 20:50:06.365503   21691 main.go:141] libmachine: (ha-315064-m03)     <serial type='pty'>
	I0318 20:50:06.365514   21691 main.go:141] libmachine: (ha-315064-m03)       <target port='0'/>
	I0318 20:50:06.365526   21691 main.go:141] libmachine: (ha-315064-m03)     </serial>
	I0318 20:50:06.365536   21691 main.go:141] libmachine: (ha-315064-m03)     <console type='pty'>
	I0318 20:50:06.365548   21691 main.go:141] libmachine: (ha-315064-m03)       <target type='serial' port='0'/>
	I0318 20:50:06.365560   21691 main.go:141] libmachine: (ha-315064-m03)     </console>
	I0318 20:50:06.365599   21691 main.go:141] libmachine: (ha-315064-m03)     <rng model='virtio'>
	I0318 20:50:06.365621   21691 main.go:141] libmachine: (ha-315064-m03)       <backend model='random'>/dev/random</backend>
	I0318 20:50:06.365632   21691 main.go:141] libmachine: (ha-315064-m03)     </rng>
	I0318 20:50:06.365639   21691 main.go:141] libmachine: (ha-315064-m03)     
	I0318 20:50:06.365647   21691 main.go:141] libmachine: (ha-315064-m03)     
	I0318 20:50:06.365654   21691 main.go:141] libmachine: (ha-315064-m03)   </devices>
	I0318 20:50:06.365662   21691 main.go:141] libmachine: (ha-315064-m03) </domain>
	I0318 20:50:06.365668   21691 main.go:141] libmachine: (ha-315064-m03) 
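
libmachine generates the domain XML above and asks libvirt to define and boot it. Outside of minikube, the same XML could be registered and started with virsh; the sketch below is illustrative only (the XML file path is a placeholder, and this is not the code path libmachine itself uses) and drives virsh from Go.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assume the XML shown in the log has been saved to this (placeholder) file.
	domainXML := "/tmp/ha-315064-m03.xml"

	// Register the domain definition with libvirt.
	if out, err := exec.Command("virsh", "define", domainXML).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("virsh define failed: %v\n%s", err, out))
	}
	// Boot the newly defined domain.
	if out, err := exec.Command("virsh", "start", "ha-315064-m03").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("virsh start failed: %v\n%s", err, out))
	}
	fmt.Println("domain ha-315064-m03 defined and started")
}
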
	I0318 20:50:06.371877   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:0f:e1:d4 in network default
	I0318 20:50:06.372408   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:06.372426   21691 main.go:141] libmachine: (ha-315064-m03) Ensuring networks are active...
	I0318 20:50:06.373001   21691 main.go:141] libmachine: (ha-315064-m03) Ensuring network default is active
	I0318 20:50:06.373284   21691 main.go:141] libmachine: (ha-315064-m03) Ensuring network mk-ha-315064 is active
	I0318 20:50:06.373590   21691 main.go:141] libmachine: (ha-315064-m03) Getting domain xml...
	I0318 20:50:06.374214   21691 main.go:141] libmachine: (ha-315064-m03) Creating domain...
	I0318 20:50:07.558873   21691 main.go:141] libmachine: (ha-315064-m03) Waiting to get IP...
	I0318 20:50:07.559660   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:07.560040   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:07.560068   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:07.559990   22592 retry.go:31] will retry after 310.268269ms: waiting for machine to come up
	I0318 20:50:07.872329   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:07.872701   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:07.872728   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:07.872680   22592 retry.go:31] will retry after 354.462724ms: waiting for machine to come up
	I0318 20:50:08.229217   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:08.229653   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:08.229698   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:08.229616   22592 retry.go:31] will retry after 319.179586ms: waiting for machine to come up
	I0318 20:50:08.549953   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:08.550351   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:08.550380   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:08.550323   22592 retry.go:31] will retry after 573.57697ms: waiting for machine to come up
	I0318 20:50:09.125080   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:09.125557   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:09.125578   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:09.125520   22592 retry.go:31] will retry after 568.689512ms: waiting for machine to come up
	I0318 20:50:09.696601   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:09.697117   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:09.697144   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:09.697063   22592 retry.go:31] will retry after 804.121348ms: waiting for machine to come up
	I0318 20:50:10.502794   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:10.503186   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:10.503212   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:10.503136   22592 retry.go:31] will retry after 1.129772692s: waiting for machine to come up
	I0318 20:50:11.633833   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:11.634303   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:11.634329   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:11.634258   22592 retry.go:31] will retry after 1.01162733s: waiting for machine to come up
	I0318 20:50:12.647391   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:12.647797   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:12.647826   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:12.647764   22592 retry.go:31] will retry after 1.148388807s: waiting for machine to come up
	I0318 20:50:13.797943   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:13.798312   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:13.798334   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:13.798267   22592 retry.go:31] will retry after 2.323236456s: waiting for machine to come up
	I0318 20:50:16.123668   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:16.124130   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:16.124158   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:16.124076   22592 retry.go:31] will retry after 2.064821918s: waiting for machine to come up
	I0318 20:50:18.189927   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:18.190475   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:18.190504   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:18.190399   22592 retry.go:31] will retry after 2.594877199s: waiting for machine to come up
	I0318 20:50:20.786623   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:20.787084   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:20.787112   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:20.787044   22592 retry.go:31] will retry after 3.538825148s: waiting for machine to come up
	I0318 20:50:24.327462   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:24.327890   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find current IP address of domain ha-315064-m03 in network mk-ha-315064
	I0318 20:50:24.327916   21691 main.go:141] libmachine: (ha-315064-m03) DBG | I0318 20:50:24.327849   22592 retry.go:31] will retry after 5.508050331s: waiting for machine to come up
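
The "will retry after ..." lines above come from a retry loop that keeps checking whether the new domain has picked up a DHCP address, with a growing delay between attempts. The sketch below shows the same idea using `virsh domifaddr`, which is not necessarily what libmachine calls internally; it is only an illustration of the retry-with-increasing-delay pattern visible in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		// Query libvirt for the domain's current interface addresses.
		out, err := exec.Command("virsh", "domifaddr", "ha-315064-m03").CombinedOutput()
		if err == nil && strings.Contains(string(out), "ipv4") {
			fmt.Printf("got address info:\n%s", out)
			return
		}
		fmt.Printf("attempt %d: no IP yet, retrying after %v\n", attempt, delay)
		time.Sleep(delay)
		// Grow the delay between attempts, mirroring the increasing waits in the log.
		delay += delay / 2
	}
	fmt.Println("gave up waiting for an IP address")
}
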
	I0318 20:50:29.838279   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:29.838872   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:29.838894   21691 main.go:141] libmachine: (ha-315064-m03) Found IP for machine: 192.168.39.84
	I0318 20:50:29.838907   21691 main.go:141] libmachine: (ha-315064-m03) Reserving static IP address...
	I0318 20:50:29.839355   21691 main.go:141] libmachine: (ha-315064-m03) DBG | unable to find host DHCP lease matching {name: "ha-315064-m03", mac: "52:54:00:9e:ed:fb", ip: "192.168.39.84"} in network mk-ha-315064
	I0318 20:50:29.908146   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Getting to WaitForSSH function...
	I0318 20:50:29.908184   21691 main.go:141] libmachine: (ha-315064-m03) Reserved static IP address: 192.168.39.84
	I0318 20:50:29.908198   21691 main.go:141] libmachine: (ha-315064-m03) Waiting for SSH to be available...
	I0318 20:50:29.910745   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:29.911170   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:29.911192   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:29.911409   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Using SSH client type: external
	I0318 20:50:29.911426   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa (-rw-------)
	I0318 20:50:29.911450   21691 main.go:141] libmachine: (ha-315064-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 20:50:29.911463   21691 main.go:141] libmachine: (ha-315064-m03) DBG | About to run SSH command:
	I0318 20:50:29.911480   21691 main.go:141] libmachine: (ha-315064-m03) DBG | exit 0
	I0318 20:50:30.036845   21691 main.go:141] libmachine: (ha-315064-m03) DBG | SSH cmd err, output: <nil>: 
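
WaitForSSH, as logged above, simply runs `exit 0` over ssh (with the flags shown a few lines earlier) until the command succeeds, which proves sshd is up and the key is accepted. A minimal illustrative Go loop doing the same with the system ssh client follows; the key path and address are taken from the log, and the flag set is trimmed for brevity.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa",
		"docker@192.168.39.84",
		"exit 0", // succeeds as soon as sshd accepts the key
	}
	for attempt := 0; attempt < 60; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
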
	I0318 20:50:30.037156   21691 main.go:141] libmachine: (ha-315064-m03) KVM machine creation complete!
	I0318 20:50:30.037500   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetConfigRaw
	I0318 20:50:30.037993   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:30.038164   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:30.038337   21691 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 20:50:30.038357   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:50:30.039802   21691 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 20:50:30.039820   21691 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 20:50:30.039828   21691 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 20:50:30.039837   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.041955   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.042325   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.042358   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.042508   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.042685   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.042857   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.042976   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.043135   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.043322   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.043333   21691 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 20:50:30.148306   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:50:30.148330   21691 main.go:141] libmachine: Detecting the provisioner...
	I0318 20:50:30.148337   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.151079   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.151470   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.151508   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.151697   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.151852   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.151955   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.152041   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.152144   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.152303   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.152317   21691 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 20:50:30.266017   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 20:50:30.266090   21691 main.go:141] libmachine: found compatible host: buildroot
	I0318 20:50:30.266104   21691 main.go:141] libmachine: Provisioning with buildroot...
	I0318 20:50:30.266116   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetMachineName
	I0318 20:50:30.266346   21691 buildroot.go:166] provisioning hostname "ha-315064-m03"
	I0318 20:50:30.266367   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetMachineName
	I0318 20:50:30.266550   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.269184   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.269593   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.269622   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.269732   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.269875   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.270030   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.270182   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.270359   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.270540   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.270557   21691 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-315064-m03 && echo "ha-315064-m03" | sudo tee /etc/hostname
	I0318 20:50:30.388652   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064-m03
	
	I0318 20:50:30.388688   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.391569   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.391986   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.392021   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.392165   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.392325   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.392456   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.392603   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.392764   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.393073   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.393096   21691 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-315064-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-315064-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-315064-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:50:30.515326   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:50:30.515359   21691 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:50:30.515378   21691 buildroot.go:174] setting up certificates
	I0318 20:50:30.515390   21691 provision.go:84] configureAuth start
	I0318 20:50:30.515404   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetMachineName
	I0318 20:50:30.515737   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:50:30.518516   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.518911   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.518949   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.519121   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.521377   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.521727   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.521754   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.521873   21691 provision.go:143] copyHostCerts
	I0318 20:50:30.521901   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:50:30.521939   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 20:50:30.521950   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:50:30.522029   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:50:30.522114   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:50:30.522139   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 20:50:30.522149   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:50:30.522187   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:50:30.522246   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:50:30.522269   21691 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 20:50:30.522278   21691 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:50:30.522311   21691 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:50:30.522384   21691 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.ha-315064-m03 san=[127.0.0.1 192.168.39.84 ha-315064-m03 localhost minikube]
	I0318 20:50:30.629470   21691 provision.go:177] copyRemoteCerts
	I0318 20:50:30.629534   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:50:30.629562   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.631999   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.632281   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.632306   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.632486   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.632687   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.632840   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.633053   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:50:30.718114   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 20:50:30.718193   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:50:30.752753   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 20:50:30.752825   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 20:50:30.781611   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 20:50:30.781691   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 20:50:30.809989   21691 provision.go:87] duration metric: took 294.58642ms to configureAuth
	I0318 20:50:30.810018   21691 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:50:30.810222   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:50:30.810296   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:30.812815   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.813186   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:30.813208   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:30.813386   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:30.813551   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.813705   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:30.813811   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:30.813966   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:30.814126   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:30.814140   21691 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:50:31.114750   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 20:50:31.114780   21691 main.go:141] libmachine: Checking connection to Docker...
	I0318 20:50:31.114791   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetURL
	I0318 20:50:31.116168   21691 main.go:141] libmachine: (ha-315064-m03) DBG | Using libvirt version 6000000
	I0318 20:50:31.118277   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.118638   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.118661   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.118812   21691 main.go:141] libmachine: Docker is up and running!
	I0318 20:50:31.118839   21691 main.go:141] libmachine: Reticulating splines...
	I0318 20:50:31.118847   21691 client.go:171] duration metric: took 25.355588031s to LocalClient.Create
	I0318 20:50:31.118877   21691 start.go:167] duration metric: took 25.35565225s to libmachine.API.Create "ha-315064"
	I0318 20:50:31.118885   21691 start.go:293] postStartSetup for "ha-315064-m03" (driver="kvm2")
	I0318 20:50:31.118895   21691 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:50:31.118911   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.119130   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:50:31.119156   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:31.121250   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.121638   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.121667   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.121814   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:31.122007   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.122175   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:31.122346   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:50:31.212613   21691 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:50:31.217241   21691 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:50:31.217260   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:50:31.217313   21691 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:50:31.217380   21691 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 20:50:31.217388   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 20:50:31.217469   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 20:50:31.228138   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:50:31.255268   21691 start.go:296] duration metric: took 136.370593ms for postStartSetup
	I0318 20:50:31.255318   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetConfigRaw
	I0318 20:50:31.255859   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:50:31.258428   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.258767   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.258785   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.259020   21691 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:50:31.259244   21691 start.go:128] duration metric: took 25.513966628s to createHost
	I0318 20:50:31.259273   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:31.261403   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.261787   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.261819   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.261945   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:31.262175   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.262367   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.262521   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:31.262693   21691 main.go:141] libmachine: Using SSH client type: native
	I0318 20:50:31.262853   21691 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0318 20:50:31.262863   21691 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:50:31.370072   21691 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710795031.353735337
	
	I0318 20:50:31.370095   21691 fix.go:216] guest clock: 1710795031.353735337
	I0318 20:50:31.370105   21691 fix.go:229] Guest: 2024-03-18 20:50:31.353735337 +0000 UTC Remote: 2024-03-18 20:50:31.259259981 +0000 UTC m=+249.418535446 (delta=94.475356ms)
	I0318 20:50:31.370123   21691 fix.go:200] guest clock delta is within tolerance: 94.475356ms
	I0318 20:50:31.370130   21691 start.go:83] releasing machines lock for "ha-315064-m03", held for 25.625002302s
	I0318 20:50:31.370151   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.370414   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:50:31.373240   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.373608   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.373637   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.375918   21691 out.go:177] * Found network options:
	I0318 20:50:31.377189   21691 out.go:177]   - NO_PROXY=192.168.39.79,192.168.39.231
	W0318 20:50:31.378336   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 20:50:31.378361   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 20:50:31.378373   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.378852   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.379029   21691 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:50:31.379128   21691 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:50:31.379165   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	W0318 20:50:31.379201   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 20:50:31.379226   21691 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 20:50:31.379296   21691 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:50:31.379317   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:50:31.381801   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.382183   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.382211   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.382230   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.382377   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:31.382545   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.382628   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:31.382651   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:31.382695   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:31.382782   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:50:31.382836   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:50:31.382951   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:50:31.383084   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:50:31.383237   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:50:31.627234   21691 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:50:31.635144   21691 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:50:31.635199   21691 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:50:31.653653   21691 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 20:50:31.653671   21691 start.go:494] detecting cgroup driver to use...
	I0318 20:50:31.653734   21691 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:50:31.672558   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:50:31.687818   21691 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:50:31.687863   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:50:31.702492   21691 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:50:31.716665   21691 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:50:31.847630   21691 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:50:32.006944   21691 docker.go:233] disabling docker service ...
	I0318 20:50:32.007019   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:50:32.024873   21691 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:50:32.038915   21691 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:50:32.184898   21691 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:50:32.305816   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:50:32.322666   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:50:32.345134   21691 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:50:32.345197   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.357483   21691 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:50:32.357536   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.368637   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.379719   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.390478   21691 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:50:32.401809   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.412809   21691 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.431659   21691 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:50:32.442734   21691 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:50:32.452890   21691 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 20:50:32.452961   21691 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 20:50:32.467849   21691 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 20:50:32.481434   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:50:32.613711   21691 ssh_runner.go:195] Run: sudo systemctl restart crio
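	The preceding sed edits point CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before this restart. Spot-checking the result by hand would look roughly like this (a sketch using the same paths shown in the log, not part of the test output):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl is-active crio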
	I0318 20:50:32.767212   21691 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:50:32.767288   21691 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:50:32.773709   21691 start.go:562] Will wait 60s for crictl version
	I0318 20:50:32.773775   21691 ssh_runner.go:195] Run: which crictl
	I0318 20:50:32.778241   21691 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:50:32.822124   21691 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 20:50:32.822194   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:50:32.857225   21691 ssh_runner.go:195] Run: crio --version
	I0318 20:50:32.889870   21691 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:50:32.891335   21691 out.go:177]   - env NO_PROXY=192.168.39.79
	I0318 20:50:32.892662   21691 out.go:177]   - env NO_PROXY=192.168.39.79,192.168.39.231
	I0318 20:50:32.893811   21691 main.go:141] libmachine: (ha-315064-m03) Calling .GetIP
	I0318 20:50:32.896659   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:32.897093   21691 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:50:32.897122   21691 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:50:32.897332   21691 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:50:32.901886   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:50:32.915359   21691 mustload.go:65] Loading cluster: ha-315064
	I0318 20:50:32.915552   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:50:32.915834   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:50:32.915875   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:50:32.930960   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0318 20:50:32.931427   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:50:32.931856   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:50:32.931875   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:50:32.932159   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:50:32.932342   21691 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:50:32.933792   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:50:32.934068   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:50:32.934106   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:50:32.949583   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0318 20:50:32.949956   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:50:32.950373   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:50:32.950395   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:50:32.950754   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:50:32.950976   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:50:32.951137   21691 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064 for IP: 192.168.39.84
	I0318 20:50:32.951149   21691 certs.go:194] generating shared ca certs ...
	I0318 20:50:32.951162   21691 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:50:32.951288   21691 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:50:32.951329   21691 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:50:32.951338   21691 certs.go:256] generating profile certs ...
	I0318 20:50:32.951404   21691 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key
	I0318 20:50:32.951429   21691 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.e2004a64
	I0318 20:50:32.951442   21691 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.e2004a64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.231 192.168.39.84 192.168.39.254]
	I0318 20:50:33.397550   21691 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.e2004a64 ...
	I0318 20:50:33.397576   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.e2004a64: {Name:mk1cf00bed9b040075db0bab18edcf4ebf6316c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:50:33.397729   21691 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.e2004a64 ...
	I0318 20:50:33.397745   21691 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.e2004a64: {Name:mkb9badf278f9f48de743fb3bc639185b71cdad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:50:33.397809   21691 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.e2004a64 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt
	I0318 20:50:33.397934   21691 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.e2004a64 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key
	I0318 20:50:33.398052   21691 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key
	I0318 20:50:33.398068   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 20:50:33.398080   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 20:50:33.398093   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 20:50:33.398107   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 20:50:33.398119   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 20:50:33.398131   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 20:50:33.398142   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 20:50:33.398157   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 20:50:33.398206   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 20:50:33.398237   21691 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 20:50:33.398247   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:50:33.398268   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:50:33.398287   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:50:33.398306   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:50:33.398343   21691 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:50:33.398370   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 20:50:33.398389   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:50:33.398406   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 20:50:33.398435   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:50:33.401437   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:50:33.401817   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:50:33.401847   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:50:33.402013   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:50:33.402201   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:50:33.402340   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:50:33.402517   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:50:33.477288   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 20:50:33.482978   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 20:50:33.495000   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 20:50:33.500369   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0318 20:50:33.516823   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 20:50:33.521352   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 20:50:33.544216   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 20:50:33.549753   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0318 20:50:33.561485   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 20:50:33.566199   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 20:50:33.578047   21691 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 20:50:33.582638   21691 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0318 20:50:33.594502   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:50:33.626948   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:50:33.655870   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:50:33.684351   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:50:33.711402   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 20:50:33.739421   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 20:50:33.765512   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:50:33.793246   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 20:50:33.819281   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 20:50:33.845654   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:50:33.873533   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 20:50:33.901553   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 20:50:33.920635   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0318 20:50:33.945822   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 20:50:33.965276   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0318 20:50:33.984548   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 20:50:34.004526   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0318 20:50:34.023454   21691 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 20:50:34.042214   21691 ssh_runner.go:195] Run: openssl version
	I0318 20:50:34.048500   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 20:50:34.061745   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 20:50:34.067023   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 20:50:34.067070   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 20:50:34.073352   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 20:50:34.085625   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:50:34.098824   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:50:34.103704   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:50:34.103758   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:50:34.110213   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 20:50:34.122051   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 20:50:34.133943   21691 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 20:50:34.139035   21691 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 20:50:34.139074   21691 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 20:50:34.145671   21691 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 20:50:34.158592   21691 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:50:34.163032   21691 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 20:50:34.163090   21691 kubeadm.go:928] updating node {m03 192.168.39.84 8443 v1.28.4 crio true true} ...
	I0318 20:50:34.163173   21691 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-315064-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:50:34.163207   21691 kube-vip.go:111] generating kube-vip config ...
	I0318 20:50:34.163233   21691 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 20:50:34.182135   21691 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 20:50:34.182201   21691 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
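	This generated kube-vip static pod manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below) and advertises the control-plane VIP 192.168.39.254 on eth0. A manual check on a control-plane node might look like the following (a sketch, not taken from the test output):
	    ls -l /etc/kubernetes/manifests/kube-vip.yaml
	    ip addr show eth0 | grep 192.168.39.254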
	I0318 20:50:34.182256   21691 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:50:34.197019   21691 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 20:50:34.197073   21691 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 20:50:34.208343   21691 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 20:50:34.208394   21691 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 20:50:34.208363   21691 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 20:50:34.208412   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 20:50:34.208442   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:50:34.208475   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 20:50:34.208403   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 20:50:34.208543   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 20:50:34.222975   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 20:50:34.223006   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 20:50:34.223009   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 20:50:34.223020   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 20:50:34.254646   21691 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 20:50:34.254734   21691 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 20:50:34.352100   21691 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 20:50:34.352146   21691 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 20:50:35.310220   21691 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 20:50:35.321490   21691 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0318 20:50:35.340489   21691 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:50:35.358546   21691 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 20:50:35.376766   21691 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 20:50:35.381356   21691 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 20:50:35.395351   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:50:35.529705   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
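	With the kubelet unit, its drop-in, and the kube-vip manifest in place, kubelet is started so the node can join the cluster. Verifying by hand would look roughly like this (a sketch, not part of the captured output):
	    sudo systemctl is-active kubelet
	    sudo journalctl -u kubelet --no-pager -n 20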
	I0318 20:50:35.552088   21691 host.go:66] Checking if "ha-315064" exists ...
	I0318 20:50:35.552397   21691 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:50:35.552433   21691 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:50:35.567606   21691 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I0318 20:50:35.567950   21691 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:50:35.568449   21691 main.go:141] libmachine: Using API Version  1
	I0318 20:50:35.568483   21691 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:50:35.568804   21691 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:50:35.569031   21691 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:50:35.569200   21691 start.go:316] joinCluster: &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:50:35.569333   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 20:50:35.569352   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:50:35.572512   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:50:35.572940   21691 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:50:35.572957   21691 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:50:35.573167   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:50:35.573381   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:50:35.573534   21691 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:50:35.573657   21691 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:50:35.746637   21691 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:50:35.746696   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q9srpv.jjvmgylq5he4abea --discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-315064-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I0318 20:51:02.069561   21691 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q9srpv.jjvmgylq5he4abea --discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-315064-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (26.322835647s)
	I0318 20:51:02.069596   21691 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 20:51:02.758565   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-315064-m03 minikube.k8s.io/updated_at=2024_03_18T20_51_02_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=ha-315064 minikube.k8s.io/primary=false
	I0318 20:51:02.904468   21691 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-315064-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 20:51:03.037245   21691 start.go:318] duration metric: took 27.468042909s to joinCluster
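The whole join sequence above is driven over SSH: ssh_runner issues the kubeadm token create, kubeadm join, and systemctl commands on the target VM and records each "Run:" / "Completed:" pair. Below is a minimal sketch of that pattern using golang.org/x/crypto/ssh; it is not minikube's ssh_runner, and the key path is a placeholder (the address and username come from the sshutil line earlier in this log).

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command on a remote host over SSH and returns
// its combined stdout/stderr, roughly one "Run:" line from the log above.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Key path is a placeholder; address and user mirror the sshutil line above.
	out, err := runRemote("192.168.39.79:22", "docker", "/path/to/id_rsa",
		"sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet")
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	fmt.Print(out)
}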
	I0318 20:51:03.037325   21691 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 20:51:03.038867   21691 out.go:177] * Verifying Kubernetes components...
	I0318 20:51:03.037612   21691 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:51:03.040194   21691 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:51:03.261813   21691 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:51:03.304329   21691 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:51:03.304606   21691 kapi.go:59] client config for ha-315064: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.crt", KeyFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key", CAFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 20:51:03.304657   21691 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.79:8443
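The warning above means the kubeconfig on disk still points at the HA VIP (192.168.39.254) while this check wants to poll one concrete control-plane endpoint, so the client host is rewritten before the node wait begins. A small client-go sketch of the same idea, with the kubeconfig path as a placeholder and the endpoint taken from the log line above:

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the rest.Config that kubectl would use (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	// Point the client at a specific apiserver instead of the stale VIP host.
	cfg.Host = "https://192.168.39.79:8443"

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = clientset // used by the readiness polls that follow
}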
	I0318 20:51:03.304869   21691 node_ready.go:35] waiting up to 6m0s for node "ha-315064-m03" to be "Ready" ...
	I0318 20:51:03.304966   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:03.304976   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:03.304985   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:03.304991   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:03.309985   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:03.805065   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:03.805085   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:03.805094   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:03.805099   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:03.809302   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:04.305047   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:04.305065   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:04.305073   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:04.305077   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:04.308973   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:04.805065   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:04.805089   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:04.805096   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:04.805100   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:04.809061   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:05.305087   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:05.305111   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:05.305123   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:05.305128   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:05.310505   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:05.311354   21691 node_ready.go:53] node "ha-315064-m03" has status "Ready":"False"
	I0318 20:51:05.805137   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:05.805158   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:05.805165   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:05.805169   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:05.808959   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:06.305962   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:06.305980   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:06.305988   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:06.305992   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:06.309603   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:06.805527   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:06.805551   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:06.805561   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:06.805569   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:06.809258   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:07.305941   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:07.305968   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:07.305980   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:07.305987   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:07.309210   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:07.806027   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:07.806054   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:07.806064   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:07.806071   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:07.810333   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:07.810847   21691 node_ready.go:53] node "ha-315064-m03" has status "Ready":"False"
	I0318 20:51:08.306120   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:08.306144   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:08.306154   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:08.306158   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:08.310547   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:08.805691   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:08.805712   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:08.805719   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:08.805723   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:08.809531   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:09.305280   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:09.305303   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:09.305312   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:09.305319   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:09.309264   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:09.805041   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:09.805061   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:09.805069   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:09.805075   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:09.808751   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:10.306092   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:10.306119   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:10.306126   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:10.306132   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:10.311801   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:10.313845   21691 node_ready.go:53] node "ha-315064-m03" has status "Ready":"False"
	I0318 20:51:10.805491   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:10.805511   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:10.805518   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:10.805522   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:10.809807   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:11.305002   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:11.305021   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.305029   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.305032   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.308395   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.309096   21691 node_ready.go:49] node "ha-315064-m03" has status "Ready":"True"
	I0318 20:51:11.309117   21691 node_ready.go:38] duration metric: took 8.004232778s for node "ha-315064-m03" to be "Ready" ...
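Each GET in the loop above is one iteration of polling the Node object until its Ready condition turns True. A hedged client-go equivalent is sketched below; it is not minikube's node_ready.go, the 500ms interval is illustrative, and the kubeconfig path is a placeholder.

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls a Node until its Ready condition is True or the
// timeout expires, mirroring the node_ready.go loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-315064-m03"); err != nil {
		log.Fatal(err)
	}
}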
	I0318 20:51:11.309127   21691 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 20:51:11.309190   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:11.309205   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.309215   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.309222   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.316399   21691 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 20:51:11.323391   21691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.323458   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fgqzg
	I0318 20:51:11.323467   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.323474   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.323479   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.326209   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.326859   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:11.326874   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.326885   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.326892   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.329697   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.330177   21691 pod_ready.go:92] pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:11.330192   21691 pod_ready.go:81] duration metric: took 6.780065ms for pod "coredns-5dd5756b68-fgqzg" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.330199   21691 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.330250   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hrrzn
	I0318 20:51:11.330260   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.330267   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.330273   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.332706   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.333325   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:11.333336   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.333342   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.333346   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.336535   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.337452   21691 pod_ready.go:92] pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:11.337465   21691 pod_ready.go:81] duration metric: took 7.25922ms for pod "coredns-5dd5756b68-hrrzn" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.337473   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.337507   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064
	I0318 20:51:11.337513   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.337520   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.337524   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.340356   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.340794   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:11.340807   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.340814   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.340817   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.343293   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.343698   21691 pod_ready.go:92] pod "etcd-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:11.343712   21691 pod_ready.go:81] duration metric: took 6.234619ms for pod "etcd-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.343720   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.343758   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m02
	I0318 20:51:11.343765   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.343771   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.343786   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.346392   21691 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 20:51:11.346888   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:11.346900   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.346906   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.346910   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.350443   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.350866   21691 pod_ready.go:92] pod "etcd-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:11.350880   21691 pod_ready.go:81] duration metric: took 7.154681ms for pod "etcd-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.350887   21691 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:11.505022   21691 request.go:629] Waited for 154.08429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:11.505091   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:11.505102   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.505114   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.505142   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.509099   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
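The "Waited ... due to client-side throttling, not priority and fairness" lines are client-go's own rate limiter: with QPS and Burst left at their defaults, the back-to-back pod and node GETs get spaced out on the client before they reach the apiserver. Continuing from the rest.Config built in the earlier sketch, the limiter can be relaxed by raising QPS and Burst before constructing the clientset (the values below are illustrative, not what minikube uses):

// client-go defaults are QPS=5 and Burst=10; raising them relaxes the
// client-side limiter that produces the "Waited ..." log lines.
cfg.QPS = 50
cfg.Burst = 100
clientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
	log.Fatal(err)
}
_ = clientset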
	I0318 20:51:11.705008   21691 request.go:629] Waited for 195.277006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:11.705063   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:11.705068   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.705078   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.705083   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.709058   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:11.905811   21691 request.go:629] Waited for 54.55273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:11.905863   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:11.905875   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:11.905882   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:11.905887   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:11.910034   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:12.106077   21691 request.go:629] Waited for 195.428399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.106130   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.106135   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.106143   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.106146   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.110591   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:12.351505   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:12.351525   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.351534   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.351539   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.355747   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:12.505815   21691 request.go:629] Waited for 149.297639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.505890   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.505896   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.505903   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.505908   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.509667   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:12.851220   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:12.851240   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.851251   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.851256   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.855159   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:12.906088   21691 request.go:629] Waited for 50.18813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.906158   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:12.906163   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:12.906170   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:12.906174   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:12.910097   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:13.351074   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:13.351095   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:13.351106   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:13.351115   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:13.356864   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:13.358162   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:13.358183   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:13.358194   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:13.358202   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:13.362613   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:13.363390   21691 pod_ready.go:102] pod "etcd-ha-315064-m03" in "kube-system" namespace has status "Ready":"False"
	I0318 20:51:13.851837   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:13.851869   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:13.851881   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:13.851887   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:13.855894   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:13.857155   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:13.857173   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:13.857185   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:13.857192   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:13.860373   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:14.351741   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:14.351767   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:14.351778   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:14.351782   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:14.356075   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:14.357252   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:14.357267   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:14.357277   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:14.357287   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:14.360811   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:14.851572   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:14.851603   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:14.851614   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:14.851620   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:14.857549   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:14.858215   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:14.858228   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:14.858236   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:14.858239   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:14.861864   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:15.351994   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:15.352015   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:15.352023   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:15.352027   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:15.358560   21691 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 20:51:15.359516   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:15.359532   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:15.359543   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:15.359546   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:15.364531   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:15.365243   21691 pod_ready.go:102] pod "etcd-ha-315064-m03" in "kube-system" namespace has status "Ready":"False"
	I0318 20:51:15.851857   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:15.851884   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:15.851892   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:15.851901   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:15.856607   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:15.857643   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:15.857658   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:15.857665   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:15.857671   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:15.861451   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:16.351875   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:16.351901   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.351913   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.351920   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.356532   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:16.357258   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:16.357275   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.357281   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.357286   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.360459   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:16.851441   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/etcd-ha-315064-m03
	I0318 20:51:16.851465   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.851477   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.851481   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.856511   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:16.857694   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:16.857715   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.857727   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.857731   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.862736   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:16.863361   21691 pod_ready.go:92] pod "etcd-ha-315064-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:16.863393   21691 pod_ready.go:81] duration metric: took 5.512499323s for pod "etcd-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.863418   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.863559   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064
	I0318 20:51:16.863572   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.863582   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.863587   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.867383   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:16.868157   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:16.868171   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.868181   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.868187   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.878385   21691 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 20:51:16.879349   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:16.879372   21691 pod_ready.go:81] duration metric: took 15.941575ms for pod "kube-apiserver-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.879386   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.879459   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m02
	I0318 20:51:16.879470   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.879480   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.879491   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.890541   21691 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 20:51:16.905819   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:16.905855   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:16.905863   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:16.905868   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:16.910649   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:16.911519   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:16.911537   21691 pod_ready.go:81] duration metric: took 32.143615ms for pod "kube-apiserver-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:16.911549   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.106023   21691 request.go:629] Waited for 194.404237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m03
	I0318 20:51:17.106123   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-315064-m03
	I0318 20:51:17.106132   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.106143   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.106156   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.110182   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:17.305434   21691 request.go:629] Waited for 194.408349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:17.305525   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:17.305536   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.305545   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.305551   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.310110   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:17.310746   21691 pod_ready.go:92] pod "kube-apiserver-ha-315064-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:17.310763   21691 pod_ready.go:81] duration metric: took 399.206242ms for pod "kube-apiserver-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.310772   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.505872   21691 request.go:629] Waited for 195.015201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064
	I0318 20:51:17.505933   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064
	I0318 20:51:17.505940   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.505952   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.505960   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.511609   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:17.705699   21691 request.go:629] Waited for 193.371749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:17.705756   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:17.705763   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.705773   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.705781   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.709793   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:17.710611   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:17.710630   21691 pod_ready.go:81] duration metric: took 399.850652ms for pod "kube-controller-manager-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.710644   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:17.905618   21691 request.go:629] Waited for 194.912966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m02
	I0318 20:51:17.905702   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m02
	I0318 20:51:17.905711   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:17.905719   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:17.905726   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:17.909529   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:18.106073   21691 request.go:629] Waited for 195.715176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:18.106133   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:18.106138   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.106152   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.106156   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.110132   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:18.110948   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:18.110965   21691 pod_ready.go:81] duration metric: took 400.313992ms for pod "kube-controller-manager-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.110975   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.305009   21691 request.go:629] Waited for 193.97322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m03
	I0318 20:51:18.305072   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-315064-m03
	I0318 20:51:18.305077   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.305084   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.305089   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.308581   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:18.505880   21691 request.go:629] Waited for 196.37643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:18.505932   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:18.505937   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.505944   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.505948   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.510487   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:18.511246   21691 pod_ready.go:92] pod "kube-controller-manager-ha-315064-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:18.511260   21691 pod_ready.go:81] duration metric: took 400.279961ms for pod "kube-controller-manager-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.511270   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bccjj" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.705350   21691 request.go:629] Waited for 194.030068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bccjj
	I0318 20:51:18.705441   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bccjj
	I0318 20:51:18.705451   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.705463   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.705470   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.710633   21691 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 20:51:18.905774   21691 request.go:629] Waited for 194.350073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:18.905832   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:18.905859   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:18.905875   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:18.905880   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:18.909529   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:18.910316   21691 pod_ready.go:92] pod "kube-proxy-bccjj" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:18.910334   21691 pod_ready.go:81] duration metric: took 399.057772ms for pod "kube-proxy-bccjj" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:18.910347   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nf4sq" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.105376   21691 request.go:629] Waited for 194.966609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nf4sq
	I0318 20:51:19.105445   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nf4sq
	I0318 20:51:19.105454   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.105467   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.105476   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.109039   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:19.305405   21691 request.go:629] Waited for 195.350108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:19.305468   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:19.305478   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.305491   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.305501   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.309525   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:19.310272   21691 pod_ready.go:92] pod "kube-proxy-nf4sq" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:19.310294   21691 pod_ready.go:81] duration metric: took 399.938335ms for pod "kube-proxy-nf4sq" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.310307   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrm24" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.505355   21691 request.go:629] Waited for 194.963644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrm24
	I0318 20:51:19.505409   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrm24
	I0318 20:51:19.505414   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.505425   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.505429   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.510095   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:19.705193   21691 request.go:629] Waited for 194.263898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:19.705261   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:19.705266   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.705274   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.705280   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.710136   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:19.711409   21691 pod_ready.go:92] pod "kube-proxy-wrm24" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:19.711428   21691 pod_ready.go:81] duration metric: took 401.113388ms for pod "kube-proxy-wrm24" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.711440   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:19.905593   21691 request.go:629] Waited for 194.087403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064
	I0318 20:51:19.905659   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064
	I0318 20:51:19.905666   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:19.905675   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:19.905687   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:19.908724   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.105780   21691 request.go:629] Waited for 196.345738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:20.105842   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064
	I0318 20:51:20.105849   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.105867   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.105875   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.109845   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.110571   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:20.110590   21691 pod_ready.go:81] duration metric: took 399.142924ms for pod "kube-scheduler-ha-315064" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.110599   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.305624   21691 request.go:629] Waited for 194.961747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m02
	I0318 20:51:20.305680   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m02
	I0318 20:51:20.305686   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.305693   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.305697   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.309543   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.505549   21691 request.go:629] Waited for 195.407491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:20.505612   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m02
	I0318 20:51:20.505617   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.505625   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.505629   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.508929   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.509783   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:20.509802   21691 pod_ready.go:81] duration metric: took 399.194649ms for pod "kube-scheduler-ha-315064-m02" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.509812   21691 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.705149   21691 request.go:629] Waited for 195.28478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m03
	I0318 20:51:20.705205   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-315064-m03
	I0318 20:51:20.705210   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.705217   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.705222   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.709528   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:20.905742   21691 request.go:629] Waited for 195.357574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:20.905809   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes/ha-315064-m03
	I0318 20:51:20.905816   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.905826   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.905835   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.909571   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:20.910180   21691 pod_ready.go:92] pod "kube-scheduler-ha-315064-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 20:51:20.910196   21691 pod_ready.go:81] duration metric: took 400.378831ms for pod "kube-scheduler-ha-315064-m03" in "kube-system" namespace to be "Ready" ...
	I0318 20:51:20.910206   21691 pod_ready.go:38] duration metric: took 9.601068459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 20:51:20.910226   21691 api_server.go:52] waiting for apiserver process to appear ...
	I0318 20:51:20.910272   21691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 20:51:20.926940   21691 api_server.go:72] duration metric: took 17.889578919s to wait for apiserver process to appear ...
	I0318 20:51:20.926962   21691 api_server.go:88] waiting for apiserver healthz status ...
	I0318 20:51:20.926978   21691 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I0318 20:51:20.931787   21691 api_server.go:279] https://192.168.39.79:8443/healthz returned 200:
	ok
	I0318 20:51:20.931838   21691 round_trippers.go:463] GET https://192.168.39.79:8443/version
	I0318 20:51:20.931843   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:20.931850   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:20.931854   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:20.933159   21691 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 20:51:20.933311   21691 api_server.go:141] control plane version: v1.28.4
	I0318 20:51:20.933329   21691 api_server.go:131] duration metric: took 6.360085ms to wait for apiserver health ...
	I0318 20:51:20.933339   21691 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 20:51:21.105713   21691 request.go:629] Waited for 172.311357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:21.105761   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:21.105772   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:21.105798   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:21.105804   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:21.113904   21691 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 20:51:21.120645   21691 system_pods.go:59] 24 kube-system pods found
	I0318 20:51:21.120676   21691 system_pods.go:61] "coredns-5dd5756b68-fgqzg" [245a67a5-7e01-445d-a741-900dd301c127] Running
	I0318 20:51:21.120683   21691 system_pods.go:61] "coredns-5dd5756b68-hrrzn" [bd22f324-f86b-458f-8443-1fbb4c47521e] Running
	I0318 20:51:21.120689   21691 system_pods.go:61] "etcd-ha-315064" [9cda89d4-982e-4b59-9d41-5318d9927e10] Running
	I0318 20:51:21.120695   21691 system_pods.go:61] "etcd-ha-315064-m02" [330ca3db-e1ba-4ce7-9b37-c3d791f7a3ad] Running
	I0318 20:51:21.120701   21691 system_pods.go:61] "etcd-ha-315064-m03" [e59c305c-3942-4ac0-a78b-7f393410a0c4] Running
	I0318 20:51:21.120706   21691 system_pods.go:61] "kindnet-dvtw7" [88b28235-5259-453e-af33-f2ab8e7e6609] Running
	I0318 20:51:21.120712   21691 system_pods.go:61] "kindnet-tbghx" [9c5ae7df-5e40-42ca-b8e6-d7bbc335e065] Running
	I0318 20:51:21.120718   21691 system_pods.go:61] "kindnet-x8cpw" [19931ea9-b153-46b1-af81-56634a6a1c87] Running
	I0318 20:51:21.120724   21691 system_pods.go:61] "kube-apiserver-ha-315064" [efa72228-3815-4456-89ee-603b73e97ab9] Running
	I0318 20:51:21.120730   21691 system_pods.go:61] "kube-apiserver-ha-315064-m02" [2a466fac-9e4b-4887-8ad3-3f01d594b615] Running
	I0318 20:51:21.120737   21691 system_pods.go:61] "kube-apiserver-ha-315064-m03" [ed0be9ce-fa97-441b-8791-5ee60a9d5382] Running
	I0318 20:51:21.120747   21691 system_pods.go:61] "kube-controller-manager-ha-315064" [2630ed62-b0c8-4cee-899a-9f7d14eabefb] Running
	I0318 20:51:21.120754   21691 system_pods.go:61] "kube-controller-manager-ha-315064-m02" [ba8783c4-bba1-41ee-97d2-62186bd2f96e] Running
	I0318 20:51:21.120765   21691 system_pods.go:61] "kube-controller-manager-ha-315064-m03" [8ad4a754-6e8d-40f5-8348-47dbbf678066] Running
	I0318 20:51:21.120771   21691 system_pods.go:61] "kube-proxy-bccjj" [f0f1ef98-75cf-47cd-a99b-ba443d7df38a] Running
	I0318 20:51:21.120777   21691 system_pods.go:61] "kube-proxy-nf4sq" [4acc350a-a057-4bdb-9d95-ee583b48fe33] Running
	I0318 20:51:21.120784   21691 system_pods.go:61] "kube-proxy-wrm24" [b686bb37-4624-4b09-b335-d292a914e41c] Running
	I0318 20:51:21.120792   21691 system_pods.go:61] "kube-scheduler-ha-315064" [2d7ccbd2-5151-466c-83b1-39bdd17813d1] Running
	I0318 20:51:21.120799   21691 system_pods.go:61] "kube-scheduler-ha-315064-m02" [2a91d68a-c56f-43c9-985b-c0a2d72d56a8] Running
	I0318 20:51:21.120805   21691 system_pods.go:61] "kube-scheduler-ha-315064-m03" [0917880d-4c3d-452b-89b7-567674a24298] Running
	I0318 20:51:21.120811   21691 system_pods.go:61] "kube-vip-ha-315064" [af9ee260-66a6-435a-957c-40b598d3d9ec] Running
	I0318 20:51:21.120820   21691 system_pods.go:61] "kube-vip-ha-315064-m02" [45c22149-503d-49ed-8b45-63f95a8c402b] Running
	I0318 20:51:21.120826   21691 system_pods.go:61] "kube-vip-ha-315064-m03" [0d376644-8c01-4b2f-b3da-337bf602d246] Running
	I0318 20:51:21.120832   21691 system_pods.go:61] "storage-provisioner" [4ddebef9-cc69-4535-8dc5-9117878507d8] Running
	I0318 20:51:21.120841   21691 system_pods.go:74] duration metric: took 187.495665ms to wait for pod list to return data ...
	I0318 20:51:21.120855   21691 default_sa.go:34] waiting for default service account to be created ...
	I0318 20:51:21.305301   21691 request.go:629] Waited for 184.350388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/default/serviceaccounts
	I0318 20:51:21.305367   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/default/serviceaccounts
	I0318 20:51:21.305374   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:21.305384   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:21.305390   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:21.309703   21691 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 20:51:21.309920   21691 default_sa.go:45] found service account: "default"
	I0318 20:51:21.309946   21691 default_sa.go:55] duration metric: took 189.082059ms for default service account to be created ...
	I0318 20:51:21.309958   21691 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 20:51:21.506072   21691 request.go:629] Waited for 196.048872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:21.506130   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/namespaces/kube-system/pods
	I0318 20:51:21.506136   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:21.506146   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:21.506152   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:21.513839   21691 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 20:51:21.520652   21691 system_pods.go:86] 24 kube-system pods found
	I0318 20:51:21.520682   21691 system_pods.go:89] "coredns-5dd5756b68-fgqzg" [245a67a5-7e01-445d-a741-900dd301c127] Running
	I0318 20:51:21.520691   21691 system_pods.go:89] "coredns-5dd5756b68-hrrzn" [bd22f324-f86b-458f-8443-1fbb4c47521e] Running
	I0318 20:51:21.520697   21691 system_pods.go:89] "etcd-ha-315064" [9cda89d4-982e-4b59-9d41-5318d9927e10] Running
	I0318 20:51:21.520702   21691 system_pods.go:89] "etcd-ha-315064-m02" [330ca3db-e1ba-4ce7-9b37-c3d791f7a3ad] Running
	I0318 20:51:21.520708   21691 system_pods.go:89] "etcd-ha-315064-m03" [e59c305c-3942-4ac0-a78b-7f393410a0c4] Running
	I0318 20:51:21.520713   21691 system_pods.go:89] "kindnet-dvtw7" [88b28235-5259-453e-af33-f2ab8e7e6609] Running
	I0318 20:51:21.520718   21691 system_pods.go:89] "kindnet-tbghx" [9c5ae7df-5e40-42ca-b8e6-d7bbc335e065] Running
	I0318 20:51:21.520724   21691 system_pods.go:89] "kindnet-x8cpw" [19931ea9-b153-46b1-af81-56634a6a1c87] Running
	I0318 20:51:21.520731   21691 system_pods.go:89] "kube-apiserver-ha-315064" [efa72228-3815-4456-89ee-603b73e97ab9] Running
	I0318 20:51:21.520739   21691 system_pods.go:89] "kube-apiserver-ha-315064-m02" [2a466fac-9e4b-4887-8ad3-3f01d594b615] Running
	I0318 20:51:21.520750   21691 system_pods.go:89] "kube-apiserver-ha-315064-m03" [ed0be9ce-fa97-441b-8791-5ee60a9d5382] Running
	I0318 20:51:21.520758   21691 system_pods.go:89] "kube-controller-manager-ha-315064" [2630ed62-b0c8-4cee-899a-9f7d14eabefb] Running
	I0318 20:51:21.520773   21691 system_pods.go:89] "kube-controller-manager-ha-315064-m02" [ba8783c4-bba1-41ee-97d2-62186bd2f96e] Running
	I0318 20:51:21.520780   21691 system_pods.go:89] "kube-controller-manager-ha-315064-m03" [8ad4a754-6e8d-40f5-8348-47dbbf678066] Running
	I0318 20:51:21.520787   21691 system_pods.go:89] "kube-proxy-bccjj" [f0f1ef98-75cf-47cd-a99b-ba443d7df38a] Running
	I0318 20:51:21.520798   21691 system_pods.go:89] "kube-proxy-nf4sq" [4acc350a-a057-4bdb-9d95-ee583b48fe33] Running
	I0318 20:51:21.520806   21691 system_pods.go:89] "kube-proxy-wrm24" [b686bb37-4624-4b09-b335-d292a914e41c] Running
	I0318 20:51:21.520813   21691 system_pods.go:89] "kube-scheduler-ha-315064" [2d7ccbd2-5151-466c-83b1-39bdd17813d1] Running
	I0318 20:51:21.520822   21691 system_pods.go:89] "kube-scheduler-ha-315064-m02" [2a91d68a-c56f-43c9-985b-c0a2d72d56a8] Running
	I0318 20:51:21.520829   21691 system_pods.go:89] "kube-scheduler-ha-315064-m03" [0917880d-4c3d-452b-89b7-567674a24298] Running
	I0318 20:51:21.520835   21691 system_pods.go:89] "kube-vip-ha-315064" [af9ee260-66a6-435a-957c-40b598d3d9ec] Running
	I0318 20:51:21.520840   21691 system_pods.go:89] "kube-vip-ha-315064-m02" [45c22149-503d-49ed-8b45-63f95a8c402b] Running
	I0318 20:51:21.520847   21691 system_pods.go:89] "kube-vip-ha-315064-m03" [0d376644-8c01-4b2f-b3da-337bf602d246] Running
	I0318 20:51:21.520853   21691 system_pods.go:89] "storage-provisioner" [4ddebef9-cc69-4535-8dc5-9117878507d8] Running
	I0318 20:51:21.520864   21691 system_pods.go:126] duration metric: took 210.898433ms to wait for k8s-apps to be running ...
	I0318 20:51:21.520877   21691 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 20:51:21.520942   21691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 20:51:21.536487   21691 system_svc.go:56] duration metric: took 15.602909ms WaitForService to wait for kubelet
	I0318 20:51:21.536510   21691 kubeadm.go:576] duration metric: took 18.499152902s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:51:21.536527   21691 node_conditions.go:102] verifying NodePressure condition ...
	I0318 20:51:21.705793   21691 request.go:629] Waited for 169.199603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.79:8443/api/v1/nodes
	I0318 20:51:21.705867   21691 round_trippers.go:463] GET https://192.168.39.79:8443/api/v1/nodes
	I0318 20:51:21.705879   21691 round_trippers.go:469] Request Headers:
	I0318 20:51:21.705891   21691 round_trippers.go:473]     Accept: application/json, */*
	I0318 20:51:21.705903   21691 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 20:51:21.709680   21691 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 20:51:21.711133   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:51:21.711152   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:51:21.711161   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:51:21.711164   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:51:21.711168   21691 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 20:51:21.711174   21691 node_conditions.go:123] node cpu capacity is 2
	I0318 20:51:21.711177   21691 node_conditions.go:105] duration metric: took 174.644722ms to run NodePressure ...
	I0318 20:51:21.711187   21691 start.go:240] waiting for startup goroutines ...
	I0318 20:51:21.711205   21691 start.go:254] writing updated cluster config ...
	I0318 20:51:21.711465   21691 ssh_runner.go:195] Run: rm -f paused
	I0318 20:51:21.762537   21691 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 20:51:21.764859   21691 out.go:177] * Done! kubectl is now configured to use "ha-315064" cluster and "default" namespace by default
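	(Editor's note) The api_server.go lines above show the readiness pattern the start log relies on: poll the control-plane endpoint's /healthz until it answers HTTP 200 with the body "ok", then move on to version and pod checks. Below is a minimal, illustrative Go sketch of that polling loop only; it is not minikube's implementation. The URL is copied from the log, and the name waitForHealthz plus the InsecureSkipVerify setting are assumptions made so the snippet runs standalone without the cluster's CA bundle.

	// healthz_poll.go: minimal sketch of polling an apiserver /healthz endpoint,
	// modeled on the "waiting for apiserver healthz status" lines in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 "ok" or the timeout elapses.
	// (Hypothetical helper for illustration; not part of minikube.)
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip TLS verification because this sketch does not
				// load the cluster CA. Only acceptable in throwaway test setups.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// A healthy apiserver answers plain "ok" with HTTP 200,
				// matching the "returned 200: ok" lines in the log.
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	}

	func main() {
		url := "https://192.168.39.79:8443/healthz" // endpoint taken from the log above
		if err := waitForHealthz(url, 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz returned ok")
	}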
	
	
	==> CRI-O <==
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.816184959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795354816160611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=236d69c3-9fb2-4273-bb49-3916b6e102ac name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.817108346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de2b993e-e8db-46f1-850d-60661137148a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.817164797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de2b993e-e8db-46f1-850d-60661137148a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.818478750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795086270877739,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710794986574984128,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710794985566639782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fff81c800b4288ef8749d177de5f1726d2af1be720e1a6e1a0c2b8e0ff10ed2,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710794843930356439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7,PodSandboxId:b9df3be0d95884a3d71e847d349251a81a13e837983404bcaf81d6d9748758c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843913371220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843906534490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]
string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014,PodSandboxId:82cdf7455196021f3853bb2dd622d30dee8a1278e46f5fb19d82b90c0c02b4f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710794841592504126,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710794837867698435,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6e1dea6afc79ba67ab10e5ebf1a855fb49ade8da5cefcd4d1b1e5dbefc84d6,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710794825173561169,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710794818263723408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81,PodSandboxId:73af5e6e2e583a7e29d168405187833dd1664279333c126592cef9455f9ca215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710794818214840878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710794818183781486,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c,PodSandboxId:7f93400f03a78dc3fcbd62b31f359208d3ee2c560f19c9b5e586f963f19ca6f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710794818123936193,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de2b993e-e8db-46f1-850d-60661137148a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.868814882Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=262d14eb-56e4-407d-86d2-b0689cb39d81 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.868892999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=262d14eb-56e4-407d-86d2-b0689cb39d81 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.870281981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cce13767-5147-4290-ac5c-71b2922bf101 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.870837843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795354870802306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cce13767-5147-4290-ac5c-71b2922bf101 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.871368221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=084880c9-a7cb-43a0-bf28-90bf0836c550 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.871483960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=084880c9-a7cb-43a0-bf28-90bf0836c550 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.871866284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795086270877739,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710794986574984128,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710794985566639782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fff81c800b4288ef8749d177de5f1726d2af1be720e1a6e1a0c2b8e0ff10ed2,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710794843930356439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7,PodSandboxId:b9df3be0d95884a3d71e847d349251a81a13e837983404bcaf81d6d9748758c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843913371220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843906534490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]
string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014,PodSandboxId:82cdf7455196021f3853bb2dd622d30dee8a1278e46f5fb19d82b90c0c02b4f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710794841592504126,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710794837867698435,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6e1dea6afc79ba67ab10e5ebf1a855fb49ade8da5cefcd4d1b1e5dbefc84d6,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710794825173561169,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710794818263723408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81,PodSandboxId:73af5e6e2e583a7e29d168405187833dd1664279333c126592cef9455f9ca215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710794818214840878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710794818183781486,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c,PodSandboxId:7f93400f03a78dc3fcbd62b31f359208d3ee2c560f19c9b5e586f963f19ca6f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710794818123936193,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=084880c9-a7cb-43a0-bf28-90bf0836c550 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.912710791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfd98913-bd1f-4cd5-a523-db275a3b89bc name=/runtime.v1.RuntimeService/Version
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.913084373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfd98913-bd1f-4cd5-a523-db275a3b89bc name=/runtime.v1.RuntimeService/Version
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.914751080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e2210ad-78ce-473a-ba0b-a7537e058ee8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.915251687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795354915226888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e2210ad-78ce-473a-ba0b-a7537e058ee8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.916149078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7ca6389-225e-4c56-be42-e392f43a2849 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.916894764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7ca6389-225e-4c56-be42-e392f43a2849 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.917304196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795086270877739,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710794986574984128,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710794985566639782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fff81c800b4288ef8749d177de5f1726d2af1be720e1a6e1a0c2b8e0ff10ed2,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710794843930356439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7,PodSandboxId:b9df3be0d95884a3d71e847d349251a81a13e837983404bcaf81d6d9748758c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843913371220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843906534490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]
string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014,PodSandboxId:82cdf7455196021f3853bb2dd622d30dee8a1278e46f5fb19d82b90c0c02b4f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710794841592504126,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710794837867698435,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6e1dea6afc79ba67ab10e5ebf1a855fb49ade8da5cefcd4d1b1e5dbefc84d6,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710794825173561169,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710794818263723408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81,PodSandboxId:73af5e6e2e583a7e29d168405187833dd1664279333c126592cef9455f9ca215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710794818214840878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710794818183781486,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c,PodSandboxId:7f93400f03a78dc3fcbd62b31f359208d3ee2c560f19c9b5e586f963f19ca6f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710794818123936193,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7ca6389-225e-4c56-be42-e392f43a2849 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.958329697Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc764437-9fb7-46d4-9454-8460e4f13997 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.958399314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc764437-9fb7-46d4-9454-8460e4f13997 name=/runtime.v1.RuntimeService/Version
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.959570501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74d8ae1d-7e09-401a-9e27-85c29b2b49e7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.960361955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795354960283826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74d8ae1d-7e09-401a-9e27-85c29b2b49e7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.960839671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0b005d7-7a62-4352-89f2-fa7765bc8f2b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.960901869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0b005d7-7a62-4352-89f2-fa7765bc8f2b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 20:55:54 ha-315064 crio[681]: time="2024-03-18 20:55:54.961229634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795086270877739,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710794986574984128,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710794985566639782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fff81c800b4288ef8749d177de5f1726d2af1be720e1a6e1a0c2b8e0ff10ed2,PodSandboxId:9426401fe1ab31f8198b713e2013f9d71c7aeb3bdccb0b41969eef6afddf9695,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710794843930356439,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7,PodSandboxId:b9df3be0d95884a3d71e847d349251a81a13e837983404bcaf81d6d9748758c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843913371220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":
\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710794843906534490,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]
string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014,PodSandboxId:82cdf7455196021f3853bb2dd622d30dee8a1278e46f5fb19d82b90c0c02b4f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710794841592504126,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710794837867698435,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6e1dea6afc79ba67ab10e5ebf1a855fb49ade8da5cefcd4d1b1e5dbefc84d6,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710794825173561169,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710794818263723408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81,PodSandboxId:73af5e6e2e583a7e29d168405187833dd1664279333c126592cef9455f9ca215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710794818214840878,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710794818183781486,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c,PodSandboxId:7f93400f03a78dc3fcbd62b31f359208d3ee2c560f19c9b5e586f963f19ca6f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710794818123936193,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0b005d7-7a62-4352-89f2-fa7765bc8f2b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	962d0c8af6a9a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   b1e1139d7a57e       busybox-5b5d89c9d6-c7lzc
	3e90a0712d87d       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago       Running             kube-vip                  1                   154ec2a128fe5       kube-vip-ha-315064
	10b2ec1f74690       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   9426401fe1ab3       storage-provisioner
	2fff81c800b42       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Exited              storage-provisioner       0                   9426401fe1ab3       storage-provisioner
	bfac5d0e77417       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      8 minutes ago       Running             coredns                   0                   b9df3be0d9588       coredns-5dd5756b68-fgqzg
	d5c124916621e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      8 minutes ago       Running             coredns                   0                   868a925ed8d8e       coredns-5dd5756b68-hrrzn
	a7126db5f2812       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    8 minutes ago       Running             kindnet-cni               0                   82cdf74551960       kindnet-tbghx
	df303842f5387       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      8 minutes ago       Running             kube-proxy                0                   01b267bb0cc88       kube-proxy-wrm24
	6c6e1dea6afc7       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     8 minutes ago       Exited              kube-vip                  0                   154ec2a128fe5       kube-vip-ha-315064
	1a42f9c834d0e       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      8 minutes ago       Running             kube-scheduler            0                   b8f2e721ddf5c       kube-scheduler-ha-315064
	80a67e792a683       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      8 minutes ago       Running             kube-apiserver            0                   73af5e6e2e583       kube-apiserver-ha-315064
	3dfd1d922dc88       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      8 minutes ago       Running             etcd                      0                   2223b5076d0b6       etcd-ha-315064
	4480ab4493cfa       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      8 minutes ago       Running             kube-controller-manager   0                   7f93400f03a78       kube-controller-manager-ha-315064
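	
	A listing like the one above can be reproduced directly on the node through CRI-O's CRI API; a minimal sketch with crictl (assuming crictl is present in the guest and pointed at the cri-o socket named in the node annotations below):
	
	  # list all containers known to cri-o, including exited ones
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  # inspect one container by ID (full IDs appear in the ListContainers responses above)
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect 962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820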
	
	
	==> coredns [bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7] <==
	[INFO] 10.244.1.2:40332 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001582028s
	[INFO] 10.244.1.2:33788 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001595063s
	[INFO] 10.244.0.4:57531 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110111s
	[INFO] 10.244.0.4:51555 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000080591s
	[INFO] 10.244.2.2:56578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147831s
	[INFO] 10.244.2.2:53449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00029003s
	[INFO] 10.244.2.2:60915 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002720288s
	[INFO] 10.244.2.2:36698 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00016504s
	[INFO] 10.244.1.2:42460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174593s
	[INFO] 10.244.1.2:45245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000246387s
	[INFO] 10.244.1.2:41375 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008534s
	[INFO] 10.244.1.2:50419 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000325333s
	[INFO] 10.244.1.2:44785 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147222s
	[INFO] 10.244.0.4:53351 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001837559s
	[INFO] 10.244.0.4:56449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081811s
	[INFO] 10.244.0.4:52543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112771s
	[INFO] 10.244.2.2:45761 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195234s
	[INFO] 10.244.2.2:59241 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000120541s
	[INFO] 10.244.1.2:34891 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210022s
	[INFO] 10.244.1.2:34411 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010526s
	[INFO] 10.244.0.4:35654 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123761s
	[INFO] 10.244.0.4:55976 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121291s
	[INFO] 10.244.2.2:60584 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000199858s
	[INFO] 10.244.2.2:57089 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139333s
	[INFO] 10.244.1.2:47817 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142953s
	
	
	==> coredns [d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea] <==
	[INFO] 10.244.2.2:59025 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167194s
	[INFO] 10.244.2.2:49800 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116262s
	[INFO] 10.244.1.2:34969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001875316s
	[INFO] 10.244.1.2:45722 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148316s
	[INFO] 10.244.1.2:51432 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00155768s
	[INFO] 10.244.0.4:35472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011908s
	[INFO] 10.244.0.4:59665 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001225277s
	[INFO] 10.244.0.4:48478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082298s
	[INFO] 10.244.0.4:58488 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037583s
	[INFO] 10.244.0.4:52714 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122718s
	[INFO] 10.244.2.2:38213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144668s
	[INFO] 10.244.2.2:33237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140758s
	[INFO] 10.244.1.2:55432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156014s
	[INFO] 10.244.1.2:43813 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140774s
	[INFO] 10.244.0.4:56118 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008172s
	[INFO] 10.244.0.4:50788 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172997s
	[INFO] 10.244.2.2:59802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176543s
	[INFO] 10.244.2.2:48593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240495s
	[INFO] 10.244.1.2:57527 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153491s
	[INFO] 10.244.1.2:41470 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000189177s
	[INFO] 10.244.1.2:34055 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148936s
	[INFO] 10.244.0.4:58773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274692s
	[INFO] 10.244.0.4:38762 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072594s
	[INFO] 10.244.0.4:34340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059481s
	[INFO] 10.244.0.4:56101 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011093s
	
	
	==> describe nodes <==
	Name:               ha-315064
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T20_47_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:47:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:55:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 20:51:43 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 20:51:43 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 20:51:43 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 20:51:43 +0000   Mon, 18 Mar 2024 20:47:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-315064
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 67f9d3eed04b4b99974be1860661f403
	  System UUID:                67f9d3ee-d04b-4b99-974b-e1860661f403
	  Boot ID:                    da42c8d7-0f88-49a8-83c7-2bcbed46eb7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-c7lzc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-5dd5756b68-fgqzg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m38s
	  kube-system                 coredns-5dd5756b68-hrrzn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m38s
	  kube-system                 etcd-ha-315064                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m47s
	  kube-system                 kindnet-tbghx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m38s
	  kube-system                 kube-apiserver-ha-315064             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-controller-manager-ha-315064    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-proxy-wrm24                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-scheduler-ha-315064             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-vip-ha-315064                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m36s  kube-proxy       
	  Normal  Starting                 8m48s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m48s  kubelet          Node ha-315064 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m48s  kubelet          Node ha-315064 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m48s  kubelet          Node ha-315064 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m48s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m38s  node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal  NodeReady                8m32s  kubelet          Node ha-315064 status is now: NodeReady
	  Normal  RegisteredNode           5m52s  node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal  RegisteredNode           4m39s  node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	
	
	Name:               ha-315064-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_49_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:49:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:52:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 20:51:33 +0000   Mon, 18 Mar 2024 20:53:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 20:51:33 +0000   Mon, 18 Mar 2024 20:53:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 20:51:33 +0000   Mon, 18 Mar 2024 20:53:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 20:51:33 +0000   Mon, 18 Mar 2024 20:53:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-315064-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 84b0eca72c194ee2b4b37351cd8bc63f
	  System UUID:                84b0eca7-2c19-4ee2-b4b3-7351cd8bc63f
	  Boot ID:                    0bb32325-70b1-4a0c-8d83-e3322fb70efd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-7z7sj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 etcd-ha-315064-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-dvtw7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-315064-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-315064-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-proxy-bccjj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-315064-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-vip-ha-315064-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m     kube-proxy       
	  Normal  RegisteredNode  5m52s  node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  RegisteredNode  4m39s  node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  NodeNotReady    2m44s  node-controller  Node ha-315064-m02 status is now: NodeNotReady
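	
	ha-315064-m02 carries the node.kubernetes.io/unreachable taints and every condition above reads "Kubelet stopped posting node status", i.e. the secondary control-plane node has stopped responding. This can be double-checked from the test host with kubectl (a sketch; it assumes the ha-315064 kubeconfig context that minikube creates for this profile):
	
	  # the unreachable node should be listed as NotReady
	  kubectl --context ha-315064 get nodes -o wide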
	
	
	Name:               ha-315064-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_51_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:50:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:55:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 20:51:30 +0000   Mon, 18 Mar 2024 20:50:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 20:51:30 +0000   Mon, 18 Mar 2024 20:50:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 20:51:30 +0000   Mon, 18 Mar 2024 20:50:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 20:51:30 +0000   Mon, 18 Mar 2024 20:51:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-315064-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf0ce8da0ac342e5b4cd58e80d68360c
	  System UUID:                cf0ce8da-0ac3-42e5-b4cd-58e80d68360c
	  Boot ID:                    d08a8a9e-b8e0-4b9d-a83b-1485ac5ce43c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-5hmqj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 etcd-ha-315064-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m49s
	  kube-system                 kindnet-x8cpw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m56s
	  kube-system                 kube-apiserver-ha-315064-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-ha-315064-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-nf4sq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-ha-315064-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-vip-ha-315064-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m50s  kube-proxy       
	  Normal  RegisteredNode  4m53s  node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal  RegisteredNode  4m52s  node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal  RegisteredNode  4m39s  node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	
	
	Name:               ha-315064-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_52_03_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:52:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 20:55:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 20:52:32 +0000   Mon, 18 Mar 2024 20:52:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 20:52:32 +0000   Mon, 18 Mar 2024 20:52:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 20:52:32 +0000   Mon, 18 Mar 2024 20:52:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 20:52:32 +0000   Mon, 18 Mar 2024 20:52:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-315064-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e505b03139344fc9b8ceffed32c9bea6
	  System UUID:                e505b031-3934-4fc9-b8ce-ffed32c9bea6
	  Boot ID:                    2195ee59-5053-4efb-a904-3189e0b7888f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwjjr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m53s
	  kube-system                 kube-proxy-dhhjx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m53s (x5 over 3m55s)  kubelet          Node ha-315064-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x5 over 3m55s)  kubelet          Node ha-315064-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x5 over 3m55s)  kubelet          Node ha-315064-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal  NodeReady                3m44s                  kubelet          Node ha-315064-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar18 20:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051703] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042795] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.565276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.402518] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.662299] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.134199] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.060622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061926] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.170895] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.158641] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.304087] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +5.155955] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.063498] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.791144] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.535740] kauditd_printk_skb: 57 callbacks suppressed
	[Mar18 20:47] kauditd_printk_skb: 35 callbacks suppressed
	[  +2.157125] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[ +10.330891] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.069969] kauditd_printk_skb: 36 callbacks suppressed
	[Mar18 20:49] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d] <==
	{"level":"warn","ts":"2024-03-18T20:55:55.272182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.279303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.286771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.287823Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.294361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.298988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.304597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.312928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.321378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.328183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.333847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.339338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.351137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.353828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.354815Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.363399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.372159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.378513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.38314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.389245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.390228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.397414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.404844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.433674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T20:55:55.488246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:55:55 up 9 min,  0 users,  load average: 0.13, 0.26, 0.17
	Linux ha-315064 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014] <==
	I0318 20:55:20.197698       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 20:55:30.212298       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 20:55:30.212356       1 main.go:227] handling current node
	I0318 20:55:30.212377       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 20:55:30.212391       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 20:55:30.212532       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 20:55:30.212542       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 20:55:30.212627       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 20:55:30.212668       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 20:55:40.221532       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 20:55:40.221654       1 main.go:227] handling current node
	I0318 20:55:40.221688       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 20:55:40.221708       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 20:55:40.221869       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 20:55:40.221890       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 20:55:40.221957       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 20:55:40.221976       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 20:55:50.230543       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 20:55:50.230708       1 main.go:227] handling current node
	I0318 20:55:50.230750       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 20:55:50.230783       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 20:55:50.230920       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 20:55:50.230942       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 20:55:50.231009       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 20:55:50.231177       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81] <==
	Trace[296947821]: ---"Write to database call failed" len:2996,err:etcdserver: leader changed 7232ms (20:49:49.545)
	Trace[296947821]: [7.232766718s] [7.232766718s] END
	I0318 20:49:49.605535       1 trace.go:236] Trace[1402986245]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4a7d370c-2ca9-49d2-8803-257dba6db4c6,client:192.168.39.231,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-315064-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (18-Mar-2024 20:49:44.975) (total time: 4630ms):
	Trace[1402986245]: ["GuaranteedUpdate etcd3" audit-id:4a7d370c-2ca9-49d2-8803-257dba6db4c6,key:/minions/ha-315064-m02,type:*core.Node,resource:nodes 4629ms (20:49:44.975)
	Trace[1402986245]:  ---"Txn call completed" 4626ms (20:49:49.605)]
	Trace[1402986245]: ---"Object stored in database" 4628ms (20:49:49.605)
	Trace[1402986245]: [4.63024099s] [4.63024099s] END
	I0318 20:49:49.607436       1 trace.go:236] Trace[1594722573]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:4b136828-60e3-45ad-beb9-94863dc9aae1,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-4t3sitwd5gbl3axy65q2vglx6a,user-agent:kube-apiserver/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 20:49:45.144) (total time: 4462ms):
	Trace[1594722573]: ["GuaranteedUpdate etcd3" audit-id:4b136828-60e3-45ad-beb9-94863dc9aae1,key:/leases/kube-system/apiserver-4t3sitwd5gbl3axy65q2vglx6a,type:*coordination.Lease,resource:leases.coordination.k8s.io 4462ms (20:49:45.144)
	Trace[1594722573]:  ---"Txn call completed" 4461ms (20:49:49.607)]
	Trace[1594722573]: [4.462900672s] [4.462900672s] END
	I0318 20:49:49.608002       1 trace.go:236] Trace[1196706199]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f965a1ef-c854-450f-8599-0a2b535aa72d,client:192.168.39.231,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 20:49:43.596) (total time: 6011ms):
	Trace[1196706199]: ["Create etcd3" audit-id:f965a1ef-c854-450f-8599-0a2b535aa72d,key:/events/kube-system/kube-vip-ha-315064-m02.17bdf6f7934b81c5,type:*core.Event,resource:events 6010ms (20:49:43.597)
	Trace[1196706199]:  ---"Txn call succeeded" 6010ms (20:49:49.607)]
	Trace[1196706199]: [6.011226268s] [6.011226268s] END
	I0318 20:49:49.610523       1 trace.go:236] Trace[656951406]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:d5d3b0f4-8498-475c-a4ff-62ea8cdd9e02,client:192.168.39.79,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-315064-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (18-Mar-2024 20:49:47.331) (total time: 2278ms):
	Trace[656951406]: ["GuaranteedUpdate etcd3" audit-id:d5d3b0f4-8498-475c-a4ff-62ea8cdd9e02,key:/minions/ha-315064-m02,type:*core.Node,resource:nodes 2278ms (20:49:47.331)
	Trace[656951406]:  ---"Txn call completed" 2274ms (20:49:49.608)]
	Trace[656951406]: ---"About to apply patch" 2275ms (20:49:49.608)
	Trace[656951406]: [2.278832914s] [2.278832914s] END
	I0318 20:49:49.646349       1 trace.go:236] Trace[571068067]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:26da06d3-3f80-46d8-9a47-317bc5453de2,client:192.168.39.231,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 20:49:44.320) (total time: 5326ms):
	Trace[571068067]: [5.326171145s] [5.326171145s] END
	I0318 20:49:49.654763       1 trace.go:236] Trace[462604528]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0802db0a-a1ee-4bbb-ac65-29622b29adc0,client:192.168.39.231,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 20:49:43.317) (total time: 6337ms):
	Trace[462604528]: [6.337589479s] [6.337589479s] END
	W0318 20:52:43.542685       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.79 192.168.39.84]
	
	
	==> kube-controller-manager [4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c] <==
	I0318 20:51:23.327294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="73.263µs"
	I0318 20:51:23.334501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="102.605µs"
	I0318 20:51:23.413872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.890061ms"
	I0318 20:51:23.414173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="145.932µs"
	I0318 20:51:26.640351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.728609ms"
	I0318 20:51:26.640410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="32.692µs"
	I0318 20:51:26.789708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.092066ms"
	I0318 20:51:26.789821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="55.014µs"
	I0318 20:51:27.032696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="19.893718ms"
	I0318 20:51:27.033415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="110.676µs"
	E0318 20:52:00.689350       1 certificate_controller.go:146] Sync csr-c2rvn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-c2rvn": the object has been modified; please apply your changes to the latest version and try again
	E0318 20:52:00.704777       1 certificate_controller.go:146] Sync csr-c2rvn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-c2rvn": the object has been modified; please apply your changes to the latest version and try again
	I0318 20:52:02.460635       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-315064-m04\" does not exist"
	I0318 20:52:02.486150       1 range_allocator.go:380] "Set node PodCIDR" node="ha-315064-m04" podCIDRs=["10.244.3.0/24"]
	I0318 20:52:02.540204       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t4cmt"
	I0318 20:52:02.540361       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dhhjx"
	I0318 20:52:02.801228       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-bl5jr"
	I0318 20:52:02.802628       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-qdg66"
	I0318 20:52:02.810597       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-ssp7z"
	I0318 20:52:07.264188       1 event.go:307] "Event occurred" object="ha-315064-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller"
	I0318 20:52:07.281193       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-315064-m04"
	I0318 20:52:11.378440       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-315064-m04"
	I0318 20:53:11.925932       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-315064-m04"
	I0318 20:53:12.001412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.520303ms"
	I0318 20:53:12.002462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.593µs"
	
	
	==> kube-proxy [df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a] <==
	I0318 20:47:18.215548       1 server_others.go:69] "Using iptables proxy"
	I0318 20:47:18.231786       1 node.go:141] Successfully retrieved node IP: 192.168.39.79
	I0318 20:47:18.348869       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 20:47:18.348893       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 20:47:18.351936       1 server_others.go:152] "Using iptables Proxier"
	I0318 20:47:18.352529       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 20:47:18.352858       1 server.go:846] "Version info" version="v1.28.4"
	I0318 20:47:18.352869       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 20:47:18.361190       1 config.go:188] "Starting service config controller"
	I0318 20:47:18.361678       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 20:47:18.361711       1 config.go:97] "Starting endpoint slice config controller"
	I0318 20:47:18.361715       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 20:47:18.363802       1 config.go:315] "Starting node config controller"
	I0318 20:47:18.363863       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 20:47:18.465585       1 shared_informer.go:318] Caches are synced for service config
	I0318 20:47:18.465657       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 20:47:18.465978       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62] <==
	I0318 20:51:22.845862       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-7z7sj" node="ha-315064-m02"
	E0318 20:51:22.852463       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-c7lzc\": pod busybox-5b5d89c9d6-c7lzc is already assigned to node \"ha-315064\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-c7lzc" node="ha-315064"
	E0318 20:51:22.852530       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b(default/busybox-5b5d89c9d6-c7lzc) wasn't assumed so cannot be forgotten"
	E0318 20:51:22.852559       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-c7lzc\": pod busybox-5b5d89c9d6-c7lzc is already assigned to node \"ha-315064\"" pod="default/busybox-5b5d89c9d6-c7lzc"
	I0318 20:51:22.852583       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-c7lzc" node="ha-315064"
	E0318 20:52:02.626499       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dhhjx\": pod kube-proxy-dhhjx is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dhhjx" node="ha-315064-m04"
	E0318 20:52:02.626993       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod c1714ef0-05aa-46ae-9e20-215a6ce0b13b(kube-system/kube-proxy-dhhjx) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.627298       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dhhjx\": pod kube-proxy-dhhjx is already assigned to node \"ha-315064-m04\"" pod="kube-system/kube-proxy-dhhjx"
	I0318 20:52:02.627457       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dhhjx" node="ha-315064-m04"
	E0318 20:52:02.647637       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t4cmt\": pod kindnet-t4cmt is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-t4cmt" node="ha-315064-m04"
	E0318 20:52:02.647879       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 0f71828b-9b62-43d2-ae99-304677e7535c(kube-system/kindnet-t4cmt) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.648085       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t4cmt\": pod kindnet-t4cmt is already assigned to node \"ha-315064-m04\"" pod="kube-system/kindnet-t4cmt"
	I0318 20:52:02.648204       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t4cmt" node="ha-315064-m04"
	E0318 20:52:02.722386       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ssp7z\": pod kindnet-ssp7z is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ssp7z" node="ha-315064-m04"
	E0318 20:52:02.724006       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod f0ef0560-8258-4d76-b09d-a6f400e388cf(kube-system/kindnet-ssp7z) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.723157       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qdg66\": pod kube-proxy-qdg66 is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qdg66" node="ha-315064-m04"
	E0318 20:52:02.724575       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 28b889b8-8098-4966-8984-abb855c84d0b(kube-system/kube-proxy-qdg66) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.724597       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qdg66\": pod kube-proxy-qdg66 is already assigned to node \"ha-315064-m04\"" pod="kube-system/kube-proxy-qdg66"
	I0318 20:52:02.724613       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qdg66" node="ha-315064-m04"
	E0318 20:52:02.724690       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ssp7z\": pod kindnet-ssp7z is already assigned to node \"ha-315064-m04\"" pod="kube-system/kindnet-ssp7z"
	I0318 20:52:02.724964       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ssp7z" node="ha-315064-m04"
	E0318 20:52:02.749937       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwjjr\": pod kindnet-rwjjr is already assigned to node \"ha-315064-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwjjr" node="ha-315064-m04"
	E0318 20:52:02.750089       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e5f58aa1-891b-47d6-ad96-6896c8500bf5(kube-system/kindnet-rwjjr) wasn't assumed so cannot be forgotten"
	E0318 20:52:02.750127       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwjjr\": pod kindnet-rwjjr is already assigned to node \"ha-315064-m04\"" pod="kube-system/kindnet-rwjjr"
	I0318 20:52:02.750149       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rwjjr" node="ha-315064-m04"
	
	
	==> kubelet <==
	Mar 18 20:51:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:51:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:51:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 20:51:22 ha-315064 kubelet[1363]: I0318 20:51:22.818888    1363 topology_manager.go:215] "Topology Admit Handler" podUID="3878d9ed-31cf-4a22-9a2e-9866d43fdb8b" podNamespace="default" podName="busybox-5b5d89c9d6-c7lzc"
	Mar 18 20:51:22 ha-315064 kubelet[1363]: I0318 20:51:22.864469    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjmpq\" (UniqueName: \"kubernetes.io/projected/3878d9ed-31cf-4a22-9a2e-9866d43fdb8b-kube-api-access-zjmpq\") pod \"busybox-5b5d89c9d6-c7lzc\" (UID: \"3878d9ed-31cf-4a22-9a2e-9866d43fdb8b\") " pod="default/busybox-5b5d89c9d6-c7lzc"
	Mar 18 20:52:07 ha-315064 kubelet[1363]: E0318 20:52:07.737600    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:52:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:52:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:52:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:52:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 20:53:07 ha-315064 kubelet[1363]: E0318 20:53:07.736453    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:53:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:53:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:53:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:53:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 20:54:07 ha-315064 kubelet[1363]: E0318 20:54:07.740706    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:54:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:54:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:54:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:54:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 20:55:07 ha-315064 kubelet[1363]: E0318 20:55:07.736658    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 20:55:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 20:55:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 20:55:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 20:55:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-315064 -n ha-315064
helpers_test.go:261: (dbg) Run:  kubectl --context ha-315064 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (410.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-315064 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-315064 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-315064 -v=7 --alsologtostderr: exit status 82 (2m2.036832586s)

                                                
                                                
-- stdout --
	* Stopping node "ha-315064-m04"  ...
	* Stopping node "ha-315064-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:55:56.945532   27232 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:55:56.945668   27232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:56.945688   27232 out.go:304] Setting ErrFile to fd 2...
	I0318 20:55:56.945698   27232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:55:56.945929   27232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:55:56.946150   27232 out.go:298] Setting JSON to false
	I0318 20:55:56.946250   27232 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:56.946618   27232 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:56.946727   27232 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:55:56.946921   27232 mustload.go:65] Loading cluster: ha-315064
	I0318 20:55:56.947076   27232 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:55:56.947120   27232 stop.go:39] StopHost: ha-315064-m04
	I0318 20:55:56.947505   27232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:56.947564   27232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:56.962298   27232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0318 20:55:56.962768   27232 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:56.963309   27232 main.go:141] libmachine: Using API Version  1
	I0318 20:55:56.963329   27232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:56.963669   27232 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:56.966239   27232 out.go:177] * Stopping node "ha-315064-m04"  ...
	I0318 20:55:56.967728   27232 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 20:55:56.967760   27232 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 20:55:56.968014   27232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 20:55:56.968046   27232 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 20:55:56.970881   27232 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:56.971301   27232 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:51:47 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 20:55:56.971337   27232 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 20:55:56.971452   27232 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 20:55:56.971637   27232 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 20:55:56.971809   27232 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 20:55:56.971955   27232 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 20:55:57.062452   27232 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 20:55:57.119255   27232 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 20:55:57.175903   27232 main.go:141] libmachine: Stopping "ha-315064-m04"...
	I0318 20:55:57.175942   27232 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:57.177475   27232 main.go:141] libmachine: (ha-315064-m04) Calling .Stop
	I0318 20:55:57.180948   27232 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 0/120
	I0318 20:55:58.509316   27232 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 20:55:58.510796   27232 main.go:141] libmachine: Machine "ha-315064-m04" was stopped.
	I0318 20:55:58.510816   27232 stop.go:75] duration metric: took 1.543090906s to stop
	I0318 20:55:58.510833   27232 stop.go:39] StopHost: ha-315064-m03
	I0318 20:55:58.511113   27232 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:55:58.511152   27232 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:55:58.525341   27232 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41175
	I0318 20:55:58.525749   27232 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:55:58.526242   27232 main.go:141] libmachine: Using API Version  1
	I0318 20:55:58.526264   27232 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:55:58.526567   27232 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:55:58.528416   27232 out.go:177] * Stopping node "ha-315064-m03"  ...
	I0318 20:55:58.529546   27232 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 20:55:58.529568   27232 main.go:141] libmachine: (ha-315064-m03) Calling .DriverName
	I0318 20:55:58.529746   27232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 20:55:58.529763   27232 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHHostname
	I0318 20:55:58.532664   27232 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:58.533136   27232 main.go:141] libmachine: (ha-315064-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:ed:fb", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:50:21 +0000 UTC Type:0 Mac:52:54:00:9e:ed:fb Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-315064-m03 Clientid:01:52:54:00:9e:ed:fb}
	I0318 20:55:58.533174   27232 main.go:141] libmachine: (ha-315064-m03) DBG | domain ha-315064-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:9e:ed:fb in network mk-ha-315064
	I0318 20:55:58.533312   27232 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHPort
	I0318 20:55:58.533477   27232 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHKeyPath
	I0318 20:55:58.533601   27232 main.go:141] libmachine: (ha-315064-m03) Calling .GetSSHUsername
	I0318 20:55:58.533706   27232 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m03/id_rsa Username:docker}
	I0318 20:55:58.629292   27232 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 20:55:58.687740   27232 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 20:55:58.743434   27232 main.go:141] libmachine: Stopping "ha-315064-m03"...
	I0318 20:55:58.743457   27232 main.go:141] libmachine: (ha-315064-m03) Calling .GetState
	I0318 20:55:58.745068   27232 main.go:141] libmachine: (ha-315064-m03) Calling .Stop
	I0318 20:55:58.748572   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 0/120
	I0318 20:55:59.749823   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 1/120
	I0318 20:56:00.751266   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 2/120
	I0318 20:56:01.752464   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 3/120
	I0318 20:56:02.753908   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 4/120
	I0318 20:56:03.755537   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 5/120
	I0318 20:56:04.756694   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 6/120
	I0318 20:56:05.757983   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 7/120
	I0318 20:56:06.759251   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 8/120
	I0318 20:56:07.760539   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 9/120
	I0318 20:56:08.762132   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 10/120
	I0318 20:56:09.763704   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 11/120
	I0318 20:56:10.765026   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 12/120
	I0318 20:56:11.766491   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 13/120
	I0318 20:56:12.767715   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 14/120
	I0318 20:56:13.769369   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 15/120
	I0318 20:56:14.770686   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 16/120
	I0318 20:56:15.772217   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 17/120
	I0318 20:56:16.773479   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 18/120
	I0318 20:56:17.775431   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 19/120
	I0318 20:56:18.777437   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 20/120
	I0318 20:56:19.778682   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 21/120
	I0318 20:56:20.780271   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 22/120
	I0318 20:56:21.781558   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 23/120
	I0318 20:56:22.783409   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 24/120
	I0318 20:56:23.784958   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 25/120
	I0318 20:56:24.786302   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 26/120
	I0318 20:56:25.787532   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 27/120
	I0318 20:56:26.789197   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 28/120
	I0318 20:56:27.790411   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 29/120
	I0318 20:56:28.791907   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 30/120
	I0318 20:56:29.793207   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 31/120
	I0318 20:56:30.795540   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 32/120
	I0318 20:56:31.796980   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 33/120
	I0318 20:56:32.798589   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 34/120
	I0318 20:56:33.799871   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 35/120
	I0318 20:56:34.801144   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 36/120
	I0318 20:56:35.802331   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 37/120
	I0318 20:56:36.803573   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 38/120
	I0318 20:56:37.804838   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 39/120
	I0318 20:56:38.806679   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 40/120
	I0318 20:56:39.808034   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 41/120
	I0318 20:56:40.809255   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 42/120
	I0318 20:56:41.810541   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 43/120
	I0318 20:56:42.811827   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 44/120
	I0318 20:56:43.813929   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 45/120
	I0318 20:56:44.815035   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 46/120
	I0318 20:56:45.816387   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 47/120
	I0318 20:56:46.817704   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 48/120
	I0318 20:56:47.819067   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 49/120
	I0318 20:56:48.820793   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 50/120
	I0318 20:56:49.822045   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 51/120
	I0318 20:56:50.823270   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 52/120
	I0318 20:56:51.824641   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 53/120
	I0318 20:56:52.825975   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 54/120
	I0318 20:56:53.827695   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 55/120
	I0318 20:56:54.828973   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 56/120
	I0318 20:56:55.830289   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 57/120
	I0318 20:56:56.831796   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 58/120
	I0318 20:56:57.833168   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 59/120
	I0318 20:56:58.834396   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 60/120
	I0318 20:56:59.835687   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 61/120
	I0318 20:57:00.836980   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 62/120
	I0318 20:57:01.838164   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 63/120
	I0318 20:57:02.839447   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 64/120
	I0318 20:57:03.840982   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 65/120
	I0318 20:57:04.842097   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 66/120
	I0318 20:57:05.843321   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 67/120
	I0318 20:57:06.845610   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 68/120
	I0318 20:57:07.846957   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 69/120
	I0318 20:57:08.848264   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 70/120
	I0318 20:57:09.850013   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 71/120
	I0318 20:57:10.851242   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 72/120
	I0318 20:57:11.852596   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 73/120
	I0318 20:57:12.853866   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 74/120
	I0318 20:57:13.855878   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 75/120
	I0318 20:57:14.857906   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 76/120
	I0318 20:57:15.859282   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 77/120
	I0318 20:57:16.860600   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 78/120
	I0318 20:57:17.861804   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 79/120
	I0318 20:57:18.863469   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 80/120
	I0318 20:57:19.864867   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 81/120
	I0318 20:57:20.866138   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 82/120
	I0318 20:57:21.867434   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 83/120
	I0318 20:57:22.868649   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 84/120
	I0318 20:57:23.870404   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 85/120
	I0318 20:57:24.871590   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 86/120
	I0318 20:57:25.872834   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 87/120
	I0318 20:57:26.874151   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 88/120
	I0318 20:57:27.875275   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 89/120
	I0318 20:57:28.876898   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 90/120
	I0318 20:57:29.878145   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 91/120
	I0318 20:57:30.880038   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 92/120
	I0318 20:57:31.881339   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 93/120
	I0318 20:57:32.882531   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 94/120
	I0318 20:57:33.884339   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 95/120
	I0318 20:57:34.885632   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 96/120
	I0318 20:57:35.886849   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 97/120
	I0318 20:57:36.888214   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 98/120
	I0318 20:57:37.889435   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 99/120
	I0318 20:57:38.890888   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 100/120
	I0318 20:57:39.891992   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 101/120
	I0318 20:57:40.893227   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 102/120
	I0318 20:57:41.895199   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 103/120
	I0318 20:57:42.896455   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 104/120
	I0318 20:57:43.897769   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 105/120
	I0318 20:57:44.899086   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 106/120
	I0318 20:57:45.900525   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 107/120
	I0318 20:57:46.901729   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 108/120
	I0318 20:57:47.903000   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 109/120
	I0318 20:57:48.904660   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 110/120
	I0318 20:57:49.906068   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 111/120
	I0318 20:57:50.907713   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 112/120
	I0318 20:57:51.909368   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 113/120
	I0318 20:57:52.910741   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 114/120
	I0318 20:57:53.912483   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 115/120
	I0318 20:57:54.913870   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 116/120
	I0318 20:57:55.915586   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 117/120
	I0318 20:57:56.917051   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 118/120
	I0318 20:57:57.919324   27232 main.go:141] libmachine: (ha-315064-m03) Waiting for machine to stop 119/120
	I0318 20:57:58.920225   27232 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 20:57:58.920274   27232 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 20:57:58.922526   27232 out.go:177] 
	W0318 20:57:58.924244   27232 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 20:57:58.924263   27232 out.go:239] * 
	* 
	W0318 20:57:58.926310   27232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 20:57:58.927924   27232 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-315064 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-315064 --wait=true -v=7 --alsologtostderr
E0318 21:00:14.158328   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:00:23.236549   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 21:01:37.203173   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-315064 --wait=true -v=7 --alsologtostderr: (4m45.779663183s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-315064
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-315064 -n ha-315064
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-315064 logs -n 25: (2.101793742s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m02:/home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m02 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04:/home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m04 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp testdata/cp-test.txt                                               | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064:/home/docker/cp-test_ha-315064-m04_ha-315064.txt                      |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064 sudo cat                                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064.txt                                |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m02:/home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m02 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03:/home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m03 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-315064 node stop m02 -v=7                                                    | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-315064 node start m02 -v=7                                                   | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:54 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-315064 -v=7                                                          | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:55 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-315064 -v=7                                                               | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:55 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-315064 --wait=true -v=7                                                   | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:57 UTC | 18 Mar 24 21:02 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-315064                                                               | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 21:02 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 20:57:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 20:57:58.990415   27593 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:57:58.990580   27593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:57:58.990594   27593 out.go:304] Setting ErrFile to fd 2...
	I0318 20:57:58.990599   27593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:57:58.990816   27593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:57:58.991358   27593 out.go:298] Setting JSON to false
	I0318 20:57:58.992243   27593 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2423,"bootTime":1710793056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:57:58.992305   27593 start.go:139] virtualization: kvm guest
	I0318 20:57:58.994922   27593 out.go:177] * [ha-315064] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:57:58.996877   27593 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 20:57:58.998425   27593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:57:58.996890   27593 notify.go:220] Checking for updates...
	I0318 20:57:59.001368   27593 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:57:59.002948   27593 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:57:59.004493   27593 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 20:57:59.005940   27593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 20:57:59.007713   27593 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:57:59.007845   27593 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:57:59.008215   27593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:57:59.008277   27593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:57:59.029144   27593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I0318 20:57:59.029616   27593 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:57:59.030232   27593 main.go:141] libmachine: Using API Version  1
	I0318 20:57:59.030258   27593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:57:59.030597   27593 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:57:59.030797   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:57:59.065815   27593 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 20:57:59.067338   27593 start.go:297] selected driver: kvm2
	I0318 20:57:59.067351   27593 start.go:901] validating driver "kvm2" against &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.253 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:57:59.067496   27593 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 20:57:59.067925   27593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:57:59.068004   27593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 20:57:59.081919   27593 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 20:57:59.082630   27593 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:57:59.082711   27593 cni.go:84] Creating CNI manager for ""
	I0318 20:57:59.082725   27593 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 20:57:59.082786   27593 start.go:340] cluster config:
	{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.253 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:57:59.082951   27593 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:57:59.085701   27593 out.go:177] * Starting "ha-315064" primary control-plane node in "ha-315064" cluster
	I0318 20:57:59.087140   27593 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:57:59.087175   27593 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 20:57:59.087184   27593 cache.go:56] Caching tarball of preloaded images
	I0318 20:57:59.087253   27593 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:57:59.087263   27593 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:57:59.087379   27593 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:57:59.087556   27593 start.go:360] acquireMachinesLock for ha-315064: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:57:59.087603   27593 start.go:364] duration metric: took 26.79µs to acquireMachinesLock for "ha-315064"
	I0318 20:57:59.087623   27593 start.go:96] Skipping create...Using existing machine configuration
	I0318 20:57:59.087632   27593 fix.go:54] fixHost starting: 
	I0318 20:57:59.087984   27593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:57:59.088028   27593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:57:59.101479   27593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0318 20:57:59.101899   27593 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:57:59.103507   27593 main.go:141] libmachine: Using API Version  1
	I0318 20:57:59.103531   27593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:57:59.103856   27593 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:57:59.104099   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:57:59.104251   27593 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:57:59.105740   27593 fix.go:112] recreateIfNeeded on ha-315064: state=Running err=<nil>
	W0318 20:57:59.105769   27593 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 20:57:59.107865   27593 out.go:177] * Updating the running kvm2 "ha-315064" VM ...
	I0318 20:57:59.109205   27593 machine.go:94] provisionDockerMachine start ...
	I0318 20:57:59.109223   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:57:59.109400   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.111955   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.112392   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.112418   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.112534   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:57:59.112701   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.112848   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.112983   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:57:59.113136   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:57:59.113300   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:57:59.113311   27593 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 20:57:59.226616   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064
	
	I0318 20:57:59.226651   27593 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:57:59.226896   27593 buildroot.go:166] provisioning hostname "ha-315064"
	I0318 20:57:59.226922   27593 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:57:59.227141   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.229699   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.230124   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.230162   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.230303   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:57:59.230503   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.230661   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.230794   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:57:59.230999   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:57:59.231202   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:57:59.231217   27593 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-315064 && echo "ha-315064" | sudo tee /etc/hostname
	I0318 20:57:59.366741   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064
	
	I0318 20:57:59.366770   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.369595   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.369969   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.369996   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.370225   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:57:59.370385   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.370540   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.370724   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:57:59.370893   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:57:59.371095   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:57:59.371116   27593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-315064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-315064/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-315064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:57:59.482665   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:57:59.482693   27593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:57:59.482712   27593 buildroot.go:174] setting up certificates
	I0318 20:57:59.482721   27593 provision.go:84] configureAuth start
	I0318 20:57:59.482729   27593 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:57:59.483002   27593 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:57:59.486002   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.486527   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.486561   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.486753   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.488827   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.489224   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.489252   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.489358   27593 provision.go:143] copyHostCerts
	I0318 20:57:59.489392   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:57:59.489444   27593 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 20:57:59.489456   27593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:57:59.489542   27593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:57:59.489652   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:57:59.489677   27593 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 20:57:59.489683   27593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:57:59.489724   27593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:57:59.489801   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:57:59.489836   27593 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 20:57:59.489849   27593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:57:59.489892   27593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:57:59.489977   27593 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.ha-315064 san=[127.0.0.1 192.168.39.79 ha-315064 localhost minikube]
	I0318 20:57:59.889061   27593 provision.go:177] copyRemoteCerts
	I0318 20:57:59.889115   27593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:57:59.889137   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.891532   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.891941   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.891968   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.892121   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:57:59.892304   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.892508   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:57:59.892664   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:57:59.976447   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 20:57:59.976538   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:58:00.006401   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 20:58:00.006481   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 20:58:00.037850   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 20:58:00.037919   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 20:58:00.065813   27593 provision.go:87] duration metric: took 583.079708ms to configureAuth
	I0318 20:58:00.065842   27593 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:58:00.066124   27593 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:58:00.066212   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:58:00.068756   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:58:00.069195   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:58:00.069222   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:58:00.069352   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:58:00.069540   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:58:00.069729   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:58:00.069908   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:58:00.070095   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:58:00.070247   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:58:00.070262   27593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:59:31.052953   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 20:59:31.052982   27593 machine.go:97] duration metric: took 1m31.943763396s to provisionDockerMachine
	I0318 20:59:31.052996   27593 start.go:293] postStartSetup for "ha-315064" (driver="kvm2")
	I0318 20:59:31.053028   27593 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:59:31.053049   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.053414   27593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:59:31.053450   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.056467   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.056990   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.057017   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.057163   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.057320   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.057457   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.057589   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:59:31.140700   27593 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:59:31.145337   27593 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:59:31.145360   27593 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:59:31.145412   27593 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:59:31.145486   27593 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 20:59:31.145497   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 20:59:31.145574   27593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 20:59:31.155749   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:59:31.183080   27593 start.go:296] duration metric: took 130.073154ms for postStartSetup
	I0318 20:59:31.183124   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.183414   27593 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0318 20:59:31.183438   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.186031   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.186422   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.186463   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.186619   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.186818   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.186958   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.187098   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	W0318 20:59:31.267547   27593 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0318 20:59:31.267568   27593 fix.go:56] duration metric: took 1m32.179937374s for fixHost
	I0318 20:59:31.267588   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.270318   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.270790   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.270821   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.270939   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.271126   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.271301   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.271408   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.271564   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:59:31.271763   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:59:31.271780   27593 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:59:31.374041   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710795571.345894013
	
	I0318 20:59:31.374063   27593 fix.go:216] guest clock: 1710795571.345894013
	I0318 20:59:31.374069   27593 fix.go:229] Guest: 2024-03-18 20:59:31.345894013 +0000 UTC Remote: 2024-03-18 20:59:31.267574664 +0000 UTC m=+92.328918413 (delta=78.319349ms)
	I0318 20:59:31.374086   27593 fix.go:200] guest clock delta is within tolerance: 78.319349ms
	I0318 20:59:31.374091   27593 start.go:83] releasing machines lock for "ha-315064", held for 1m32.286477281s
	I0318 20:59:31.374107   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.374385   27593 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:59:31.377019   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.377404   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.377424   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.377584   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.378138   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.378314   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.378398   27593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:59:31.378447   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.378529   27593 ssh_runner.go:195] Run: cat /version.json
	I0318 20:59:31.378548   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.380830   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.381262   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.381285   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.381299   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.381463   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.381626   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.381779   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.381811   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.381890   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.381938   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.382037   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:59:31.382076   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.382209   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.382357   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:59:31.459059   27593 ssh_runner.go:195] Run: systemctl --version
	I0318 20:59:31.482910   27593 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:59:31.653491   27593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:59:31.661490   27593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:59:31.661569   27593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:59:31.672371   27593 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 20:59:31.672392   27593 start.go:494] detecting cgroup driver to use...
	I0318 20:59:31.672445   27593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:59:31.690263   27593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:59:31.705517   27593 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:59:31.705580   27593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:59:31.720644   27593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:59:31.735117   27593 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:59:31.897780   27593 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:59:32.107296   27593 docker.go:233] disabling docker service ...
	I0318 20:59:32.107352   27593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:59:32.159372   27593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:59:32.175739   27593 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:59:32.394495   27593 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:59:32.572710   27593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:59:32.589737   27593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:59:32.610355   27593 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:59:32.610429   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.623702   27593 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:59:32.623761   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.636402   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.649702   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.661655   27593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:59:32.675530   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.688349   27593 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.700484   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.712473   27593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:59:32.723198   27593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 20:59:32.734314   27593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:59:32.901886   27593 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 20:59:42.697121   27593 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.79519496s)
	I0318 20:59:42.697151   27593 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:59:42.697208   27593 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:59:42.702795   27593 start.go:562] Will wait 60s for crictl version
	I0318 20:59:42.702834   27593 ssh_runner.go:195] Run: which crictl
	I0318 20:59:42.707268   27593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:59:42.751977   27593 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 20:59:42.752054   27593 ssh_runner.go:195] Run: crio --version
	I0318 20:59:42.782120   27593 ssh_runner.go:195] Run: crio --version
	I0318 20:59:42.813735   27593 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:59:42.815251   27593 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:59:42.817837   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:42.818216   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:42.818245   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:42.818470   27593 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:59:42.823704   27593 kubeadm.go:877] updating cluster {Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.253 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 20:59:42.823847   27593 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:59:42.823895   27593 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:59:42.873949   27593 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 20:59:42.873965   27593 crio.go:433] Images already preloaded, skipping extraction
	I0318 20:59:42.874012   27593 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:59:42.911568   27593 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 20:59:42.911585   27593 cache_images.go:84] Images are preloaded, skipping loading
	I0318 20:59:42.911593   27593 kubeadm.go:928] updating node { 192.168.39.79 8443 v1.28.4 crio true true} ...
	I0318 20:59:42.911712   27593 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-315064 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:59:42.911782   27593 ssh_runner.go:195] Run: crio config
	I0318 20:59:42.965674   27593 cni.go:84] Creating CNI manager for ""
	I0318 20:59:42.965693   27593 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 20:59:42.965703   27593 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 20:59:42.965721   27593 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.79 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-315064 NodeName:ha-315064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 20:59:42.965855   27593 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-315064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
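Context note: the kubeadm config dumped above is rendered from the options printed at kubeadm.go:181. As an illustration only (this is not minikube's actual template), a document like this can be produced with Go's text/template; the struct and template below cover just a few of the values from the log:

package main

import (
	"os"
	"text/template"
)

// cfg holds a handful of the values substituted into the config; the real
// generator carries many more options (see the kubeadm.go:181 line above).
type cfg struct {
	AdvertiseAddress  string
	BindPort          int
	ClusterName       string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, cfg{
		AdvertiseAddress:  "192.168.39.79",
		BindPort:          8443,
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.28.4",
	})
}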
	
	I0318 20:59:42.965873   27593 kube-vip.go:111] generating kube-vip config ...
	I0318 20:59:42.965911   27593 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 20:59:42.978464   27593 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 20:59:42.978583   27593 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
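Context note: the kube-vip manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, which is the staticPodPath configured in the kubelet config earlier, so the kubelet runs it as a static pod without involving the API server. A minimal sketch of dropping such a manifest into place, assuming the YAML has already been rendered into a string:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// writeStaticPod writes a rendered manifest into the kubelet's staticPodPath;
// the kubelet watches this directory and starts the pod on its own.
func writeStaticPod(manifestDir, name, yaml string) error {
	if err := os.MkdirAll(manifestDir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(manifestDir, name), []byte(yaml), 0o644)
}

func main() {
	// kubeVIPYAML stands in for the rendered config shown above.
	const kubeVIPYAML = "apiVersion: v1\nkind: Pod\n# ...rest of the manifest...\n"
	if err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml", kubeVIPYAML); err != nil {
		log.Fatal(err)
	}
}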
	I0318 20:59:42.978633   27593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:59:42.988408   27593 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 20:59:42.988457   27593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 20:59:42.998179   27593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0318 20:59:43.017148   27593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:59:43.034816   27593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0318 20:59:43.052943   27593 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 20:59:43.071892   27593 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 20:59:43.075943   27593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:59:43.226499   27593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:59:43.241719   27593 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064 for IP: 192.168.39.79
	I0318 20:59:43.241736   27593 certs.go:194] generating shared ca certs ...
	I0318 20:59:43.241750   27593 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:59:43.241913   27593 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:59:43.241961   27593 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:59:43.241972   27593 certs.go:256] generating profile certs ...
	I0318 20:59:43.242051   27593 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key
	I0318 20:59:43.242078   27593 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.5962ea4e
	I0318 20:59:43.242091   27593 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.5962ea4e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.231 192.168.39.84 192.168.39.254]
	I0318 20:59:43.325257   27593 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.5962ea4e ...
	I0318 20:59:43.325284   27593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.5962ea4e: {Name:mk2a080b7e875c8dea1076aff4dbd4e65753639d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:59:43.325470   27593 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.5962ea4e ...
	I0318 20:59:43.325486   27593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.5962ea4e: {Name:mkec9fd0aa1c43e53fb19a42378c875d41da6f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:59:43.325579   27593 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.5962ea4e -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt
	I0318 20:59:43.325718   27593 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.5962ea4e -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key
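Context note: certs.go:363 issues the apiserver serving certificate with the three control-plane IPs, the HA VIP 192.168.39.254, and the service/localhost addresses as SANs, signed by the existing minikubeCA. A hedged, self-contained crypto/x509 sketch of issuing such a certificate; it creates a throwaway CA instead of loading minikube's key pair, and all names here are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow reuses the already-generated minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// IP SANs copied from the crypto.go:68 line above.
	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.79", "192.168.39.231", "192.168.39.84", "192.168.39.254"}
	var ips []net.IP
	for _, s := range sans {
		ips = append(ips, net.ParseIP(s))
	}

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}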
	I0318 20:59:43.325845   27593 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key
	I0318 20:59:43.325861   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 20:59:43.325873   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 20:59:43.325886   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 20:59:43.325899   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 20:59:43.325911   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 20:59:43.325923   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 20:59:43.325938   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 20:59:43.325955   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 20:59:43.326006   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 20:59:43.326035   27593 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 20:59:43.326044   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:59:43.326067   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:59:43.326087   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:59:43.326107   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:59:43.326141   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:59:43.326168   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 20:59:43.326182   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:59:43.326194   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 20:59:43.326798   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:59:43.356074   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:59:43.382454   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:59:43.409819   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:59:43.437134   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 20:59:43.465376   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 20:59:43.493142   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:59:43.519214   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 20:59:43.546113   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 20:59:43.572049   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:59:43.598109   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 20:59:43.624327   27593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 20:59:43.642379   27593 ssh_runner.go:195] Run: openssl version
	I0318 20:59:43.648476   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 20:59:43.659629   27593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 20:59:43.664670   27593 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 20:59:43.664734   27593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 20:59:43.670737   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 20:59:43.680215   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:59:43.691328   27593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:59:43.696040   27593 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:59:43.696076   27593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:59:43.702113   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 20:59:43.711463   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 20:59:43.722633   27593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 20:59:43.727184   27593 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 20:59:43.727227   27593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 20:59:43.733139   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 20:59:43.742546   27593 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:59:43.747389   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 20:59:43.753196   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 20:59:43.759065   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 20:59:43.764939   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 20:59:43.770652   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 20:59:43.776479   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
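Context note: each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours). An equivalent hedged check in Go, assuming the certificate file is readable locally; the path used in main is just one of the certs listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}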
	I0318 20:59:43.782806   27593 kubeadm.go:391] StartCluster: {Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.253 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:59:43.782974   27593 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 20:59:43.783035   27593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 20:59:43.828734   27593 cri.go:89] found id: "69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f"
	I0318 20:59:43.828757   27593 cri.go:89] found id: "b00d88c41f5b8f0909774cc7a06c87525bad093fc810f8b1ef0c10dc9d8d804f"
	I0318 20:59:43.828762   27593 cri.go:89] found id: "287440cfd8515950c684aba8aaa59b80068653836e2e952239977bb4dbbd4607"
	I0318 20:59:43.828766   27593 cri.go:89] found id: "6260c164c8ab141652f895ba2381853cc2b2d40476c56b5c34119d998c0458e3"
	I0318 20:59:43.828770   27593 cri.go:89] found id: "72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62"
	I0318 20:59:43.828778   27593 cri.go:89] found id: "3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02"
	I0318 20:59:43.828780   27593 cri.go:89] found id: "10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375"
	I0318 20:59:43.828783   27593 cri.go:89] found id: "bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7"
	I0318 20:59:43.828785   27593 cri.go:89] found id: "d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea"
	I0318 20:59:43.828790   27593 cri.go:89] found id: "a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014"
	I0318 20:59:43.828796   27593 cri.go:89] found id: "df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a"
	I0318 20:59:43.828799   27593 cri.go:89] found id: "1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62"
	I0318 20:59:43.828806   27593 cri.go:89] found id: "80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81"
	I0318 20:59:43.828809   27593 cri.go:89] found id: "3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d"
	I0318 20:59:43.828818   27593 cri.go:89] found id: "4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c"
	I0318 20:59:43.828825   27593 cri.go:89] found id: ""
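Context note: the container IDs in the cri.go:89 lines above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation at cri.go:54. A hedged sketch of driving the same query from Go with os/exec (requires crictl on PATH and root privileges; purely illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same filter as the log above: only containers in the kube-system namespace.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}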
	I0318 20:59:43.828871   27593 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.579547807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e44b0ea0-9983-4221-aae1-efa67274c781 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.580443331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c0c3146-be57-4c17-a87a-3e01bc2612f0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.580901859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795765580879085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c0c3146-be57-4c17-a87a-3e01bc2612f0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.581553833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eafed8a8-d246-49df-a8c7-a6332855772b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.581638670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eafed8a8-d246-49df-a8c7-a6332855772b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.583366087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710795658731966492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710795650725346849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5cb2d80a3b6f398e3ba023895fdfcc1514280cd6c7dde9aec739d4c2e898b5,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710795643738474805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710795632726534417,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1d00cae40376cfaf82ae9daa930450935fe4f57e6617936016fae5b654a0a0,PodSandboxId:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795624106019855,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710795623755689596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac,PodSandboxId:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795593617382217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metri
cs\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0043c9349ca8faa593d071fcadfff3013fbc8d2b72c4feaa37fc9f2df1f08b3a,PodSandboxId:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710795592476913665,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.has
h: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7,PodSandboxId:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710795590984837301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.contain
er.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcca6592d242f7b77728fa67d8577fbdbf9d494ef724161d1da281ec0324099,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710795590597369427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCou
nt: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710795591012124848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b,PodSandboxId:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795591239685729,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701,PodSandboxId:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710795590683222942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3,PodSandboxId:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710795590647848103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710795590534860494,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f,PodSandboxId:9333c93c0593d3573c59715027c2026e59f0d374330ed745ed3f149853572126,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710795572186501613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710795380728384356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: k
ube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710795086270972277,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-
5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710794843907362809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710794837867814543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710794818263785174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710794818183840
430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eafed8a8-d246-49df-a8c7-a6332855772b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.642976454Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c62d8c6-d872-4be4-b5e6-ba882bc55adc name=/runtime.v1.RuntimeService/Version
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.643144752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c62d8c6-d872-4be4-b5e6-ba882bc55adc name=/runtime.v1.RuntimeService/Version
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.645143270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c0960b0-363d-4942-9e0c-d7ba500f5c05 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.645809353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795765645780575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c0960b0-363d-4942-9e0c-d7ba500f5c05 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.646652559Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf2d2ede-5c30-4ef5-aa82-e4d6205eb680 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.646707039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf2d2ede-5c30-4ef5-aa82-e4d6205eb680 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.647293776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710795658731966492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710795650725346849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5cb2d80a3b6f398e3ba023895fdfcc1514280cd6c7dde9aec739d4c2e898b5,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710795643738474805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710795632726534417,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1d00cae40376cfaf82ae9daa930450935fe4f57e6617936016fae5b654a0a0,PodSandboxId:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795624106019855,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710795623755689596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac,PodSandboxId:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795593617382217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metri
cs\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0043c9349ca8faa593d071fcadfff3013fbc8d2b72c4feaa37fc9f2df1f08b3a,PodSandboxId:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710795592476913665,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.has
h: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7,PodSandboxId:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710795590984837301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.contain
er.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcca6592d242f7b77728fa67d8577fbdbf9d494ef724161d1da281ec0324099,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710795590597369427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCou
nt: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710795591012124848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b,PodSandboxId:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795591239685729,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701,PodSandboxId:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710795590683222942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3,PodSandboxId:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710795590647848103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710795590534860494,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f,PodSandboxId:9333c93c0593d3573c59715027c2026e59f0d374330ed745ed3f149853572126,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710795572186501613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710795380728384356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: k
ube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710795086270972277,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-
5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710794843907362809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710794837867814543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710794818263785174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710794818183840
430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf2d2ede-5c30-4ef5-aa82-e4d6205eb680 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.709136046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6aecd137-1611-4c09-983e-b33d9ca86eb2 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.709249221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6aecd137-1611-4c09-983e-b33d9ca86eb2 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.710675280Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb89188e-ed7f-4870-931a-84ad4359b2a8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.711428292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795765711400678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb89188e-ed7f-4870-931a-84ad4359b2a8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.713543911Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3611a4a9-55f4-4f77-8710-b674cd950f2e name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.713847847Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-c7lzc,Uid:3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795623922915453,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T20:51:22.818509500Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-315064,Uid:95034e2848fe757395e864ee468c38aa,Namespace:kube-system,Attempt:1,},State:SANDBOX
_READY,CreatedAt:1710795590363931658,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{kubernetes.io/config.hash: 95034e2848fe757395e864ee468c38aa,kubernetes.io/config.seen: 2024-03-18T20:47:07.676587976Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-fgqzg,Uid:245a67a5-7e01-445d-a741-900dd301c127,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710795590285789823,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024
-03-18T20:47:23.299111863Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-hrrzn,Uid:bd22f324-f86b-458f-8443-1fbb4c47521e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795590211999616,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T20:47:23.291215815Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&PodSandboxMetadata{Name:etcd-ha-315064,Uid:455fc330bc32275f51604045163662be,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795590183345655,Labels:map[string]string{compo
nent: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.79:2379,kubernetes.io/config.hash: 455fc330bc32275f51604045163662be,kubernetes.io/config.seen: 2024-03-18T20:47:07.676580927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&PodSandboxMetadata{Name:kube-proxy-wrm24,Uid:b686bb37-4624-4b09-b335-d292a914e41c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795590178261100,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,k8s-app: kube-proxy,pod-template-generation: 1,}
,Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T20:47:17.313012831Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&PodSandboxMetadata{Name:kindnet-tbghx,Uid:9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795590172730797,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T20:47:17.292225724Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-315064,Uid:b6c104d584739b45afeee644d28478c9,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795590163791027,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b6c104d584739b45afeee644d28478c9,kubernetes.io/config.seen: 2024-03-18T20:47:07.676587322Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-315064,Uid:011b56247b514cfea4dc3b2076428e51,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795590151901366,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b5624
7b514cfea4dc3b2076428e51,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 011b56247b514cfea4dc3b2076428e51,kubernetes.io/config.seen: 2024-03-18T20:47:07.676586004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4ddebef9-cc69-4535-8dc5-9117878507d8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795590130734169,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-
test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T20:47:23.302524785Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-315064,Uid:9524c4b1818864ef82847de110d9d59a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710795590068808869,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.79:8443,kubernetes.io/config.hash: 9524c4b1818864ef82847de110d9d59a,kubernetes.io/config.seen: 2024-03-18T20:47:07.676584814Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3611a4a9-55f4-4f77-8710-b674cd950f2e name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.714646966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba28738f-270c-4a6c-a3cc-10e56d2a5c20 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.714734978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba28738f-270c-4a6c-a3cc-10e56d2a5c20 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.714981564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710795658731966492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710795650725346849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5cb2d80a3b6f398e3ba023895fdfcc1514280cd6c7dde9aec739d4c2e898b5,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710795643738474805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710795632726534417,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1d00cae40376cfaf82ae9daa930450935fe4f57e6617936016fae5b654a0a0,PodSandboxId:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795624106019855,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac,PodSandboxId:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795593617382217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",
\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0043c9349ca8faa593d071fcadfff3013fbc8d2b72c4feaa37fc9f2df1f08b3a,PodSandboxId:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710795592476913665,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7
e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7,PodSandboxId:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710795590984837301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b,PodSandboxId:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795591239685729,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701,PodSandboxId:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710795590683222942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3,PodSandboxId:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710795590647848103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc33
0bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba28738f-270c-4a6c-a3cc-10e56d2a5c20 name=/runtime.v1.RuntimeService/ListContainers
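	The entries above are kubelet's periodic CRI polling of CRI-O over gRPC (Version, ImageFsInfo, ListPodSandbox, and ListContainers with a CONTAINER_RUNNING state filter). As an illustrative aside only, not part of the test run, a minimal Go sketch that issues the same state-filtered ListContainers RPC against the CRI-O socket might look like the following; the socket path and the k8s.io/cri-api module are assumptions about the environment:

	    // Minimal sketch (assumed environment): query CRI-O for running containers,
	    // mirroring the filtered ListContainers request seen in the log above.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Assumption: CRI-O is listening on its default UNIX socket.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatalf("dial CRI-O: %v", err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // Same filter as the logged request: only CONTAINER_RUNNING containers.
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
	            Filter: &runtimeapi.ContainerFilter{
	                State: &runtimeapi.ContainerStateValue{
	                    State: runtimeapi.ContainerState_CONTAINER_RUNNING,
	                },
	            },
	        })
	        if err != nil {
	            log.Fatalf("ListContainers: %v", err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	        }
	    }

	The same container list is what crictl ps reports when its runtime endpoint is pointed at the crio socket.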
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.716530853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76172259-9781-4374-91af-b25cda1b3fe2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.716607063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76172259-9781-4374-91af-b25cda1b3fe2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:02:45 ha-315064 crio[4010]: time="2024-03-18 21:02:45.717138943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710795658731966492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710795650725346849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5cb2d80a3b6f398e3ba023895fdfcc1514280cd6c7dde9aec739d4c2e898b5,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710795643738474805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710795632726534417,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1d00cae40376cfaf82ae9daa930450935fe4f57e6617936016fae5b654a0a0,PodSandboxId:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795624106019855,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710795623755689596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac,PodSandboxId:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795593617382217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metri
cs\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0043c9349ca8faa593d071fcadfff3013fbc8d2b72c4feaa37fc9f2df1f08b3a,PodSandboxId:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710795592476913665,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.has
h: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7,PodSandboxId:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710795590984837301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.contain
er.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcca6592d242f7b77728fa67d8577fbdbf9d494ef724161d1da281ec0324099,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710795590597369427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCou
nt: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710795591012124848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b,PodSandboxId:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795591239685729,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701,PodSandboxId:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710795590683222942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3,PodSandboxId:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710795590647848103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710795590534860494,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f,PodSandboxId:9333c93c0593d3573c59715027c2026e59f0d374330ed745ed3f149853572126,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710795572186501613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710795380728384356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: k
ube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710795086270972277,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-
5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710794843907362809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710794837867814543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710794818263785174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710794818183840
430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76172259-9781-4374-91af-b25cda1b3fe2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	037f74b5576e6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   3                   88ecf2864f169       kube-controller-manager-ha-315064
	925e697415c9d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   5bfe7f29d4520       kindnet-tbghx
	0a5cb2d80a3b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Running             storage-provisioner       4                   700ca15f1576d       storage-provisioner
	0d936680575ab       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Running             kube-apiserver            3                   1acef47bba5b0       kube-apiserver-ha-315064
	8f1d00cae4037       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   df412c1c15d89       busybox-5b5d89c9d6-c7lzc
	41aa0b241e9bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   2                   88ecf2864f169       kube-controller-manager-ha-315064
	0e6b228e5b035       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   2                   b580e6e0ea500       coredns-5dd5756b68-fgqzg
	0043c9349ca8f       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  3                   f8522b2cee4ff       kube-vip-ha-315064
	1a2f6a0548a18       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   e57c024b5c8e1       coredns-5dd5756b68-hrrzn
	d26255f506377       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   5bfe7f29d4520       kindnet-tbghx
	93d601359a854       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   54ab309e4736c       kube-proxy-wrm24
	d7de86ecadf35       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   3457a60ae7eb5       kube-scheduler-ha-315064
	827286fc4f58d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   a78045a2613b0       etcd-ha-315064
	7dcca6592d242       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   700ca15f1576d       storage-provisioner
	cf4b0f5d3ae02       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   1acef47bba5b0       kube-apiserver-ha-315064
	69b53dacdf2d5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago        Exited              coredns                   1                   9333c93c0593d       coredns-5dd5756b68-fgqzg
	72dc2ec14492d       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago        Exited              kube-vip                  2                   154ec2a128fe5       kube-vip-ha-315064
	962d0c8af6a9a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   b1e1139d7a57e       busybox-5b5d89c9d6-c7lzc
	d5c124916621e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      15 minutes ago       Exited              coredns                   0                   868a925ed8d8e       coredns-5dd5756b68-hrrzn
	df303842f5387       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      15 minutes ago       Exited              kube-proxy                0                   01b267bb0cc88       kube-proxy-wrm24
	1a42f9c834d0e       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      15 minutes ago       Exited              kube-scheduler            0                   b8f2e721ddf5c       kube-scheduler-ha-315064
	3dfd1d922dc88       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      15 minutes ago       Exited              etcd                      0                   2223b5076d0b6       etcd-ha-315064
	
	
	==> coredns [0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39510 - 17999 "HINFO IN 8959287172446377104.761263948973536297. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020278603s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:58396->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58803 - 20913 "HINFO IN 7029768413915660847.7035293484429296979. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015204357s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35281 - 44397 "HINFO IN 3605527226997148585.3932514430432415525. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021358967s
	
	
	==> coredns [d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea] <==
	[INFO] 10.244.0.4:35472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011908s
	[INFO] 10.244.0.4:59665 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001225277s
	[INFO] 10.244.0.4:48478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082298s
	[INFO] 10.244.0.4:58488 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037583s
	[INFO] 10.244.0.4:52714 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122718s
	[INFO] 10.244.2.2:38213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144668s
	[INFO] 10.244.2.2:33237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140758s
	[INFO] 10.244.1.2:55432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156014s
	[INFO] 10.244.1.2:43813 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140774s
	[INFO] 10.244.0.4:56118 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008172s
	[INFO] 10.244.0.4:50788 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172997s
	[INFO] 10.244.2.2:59802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176543s
	[INFO] 10.244.2.2:48593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240495s
	[INFO] 10.244.1.2:57527 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153491s
	[INFO] 10.244.1.2:41470 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000189177s
	[INFO] 10.244.1.2:34055 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148936s
	[INFO] 10.244.0.4:58773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274692s
	[INFO] 10.244.0.4:38762 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072594s
	[INFO] 10.244.0.4:34340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059481s
	[INFO] 10.244.0.4:56101 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011093s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-315064
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T20_47_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:47:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:02:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:00:46 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:00:46 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:00:46 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:00:46 +0000   Mon, 18 Mar 2024 20:47:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-315064
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 67f9d3eed04b4b99974be1860661f403
	  System UUID:                67f9d3ee-d04b-4b99-974b-e1860661f403
	  Boot ID:                    da42c8d7-0f88-49a8-83c7-2bcbed46eb7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-c7lzc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-fgqzg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-hrrzn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-315064                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-tbghx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-315064             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-315064    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-wrm24                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-315064             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-315064                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 2m13s                  kube-proxy       
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-315064 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-315064 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-315064 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-315064 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Warning  ContainerGCFailed        3m39s (x2 over 4m39s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           119s                   node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal   RegisteredNode           96s                    node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal   RegisteredNode           23s                    node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	
	
	Name:               ha-315064-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_49_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:49:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:02:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:01:17 +0000   Mon, 18 Mar 2024 21:00:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:01:17 +0000   Mon, 18 Mar 2024 21:00:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:01:17 +0000   Mon, 18 Mar 2024 21:00:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:01:17 +0000   Mon, 18 Mar 2024 21:00:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-315064-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 84b0eca72c194ee2b4b37351cd8bc63f
	  System UUID:                84b0eca7-2c19-4ee2-b4b3-7351cd8bc63f
	  Boot ID:                    c2d21a9e-046d-4f00-8ea7-ede4fd23ed3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-7z7sj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-315064-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-dvtw7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-315064-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-315064-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bccjj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-315064-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-315064-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 106s                   kube-proxy       
	  Normal  RegisteredNode           12m                    node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  NodeNotReady             9m35s                  node-controller  Node ha-315064-m02 status is now: NodeNotReady
	  Normal  Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node ha-315064-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node ha-315064-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m37s (x7 over 2m37s)  kubelet          Node ha-315064-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           119s                   node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  RegisteredNode           96s                    node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  RegisteredNode           23s                    node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	
	
	Name:               ha-315064-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_51_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:50:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:02:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:02:17 +0000   Mon, 18 Mar 2024 21:01:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:02:17 +0000   Mon, 18 Mar 2024 21:01:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:02:17 +0000   Mon, 18 Mar 2024 21:01:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:02:17 +0000   Mon, 18 Mar 2024 21:01:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-315064-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf0ce8da0ac342e5b4cd58e80d68360c
	  System UUID:                cf0ce8da-0ac3-42e5-b4cd-58e80d68360c
	  Boot ID:                    f6aee497-2e7d-4e05-9555-dfe4908f6464
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-5hmqj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-315064-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-x8cpw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-315064-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-315064-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-nf4sq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-315064-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-315064-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 38s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal   RegisteredNode           119s               node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	  Normal   NodeNotReady             79s                node-controller  Node ha-315064-m03 status is now: NodeNotReady
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s (x3 over 59s)  kubelet          Node ha-315064-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x3 over 59s)  kubelet          Node ha-315064-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x3 over 59s)  kubelet          Node ha-315064-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s (x2 over 59s)  kubelet          Node ha-315064-m03 has been rebooted, boot id: f6aee497-2e7d-4e05-9555-dfe4908f6464
	  Normal   NodeReady                59s (x2 over 59s)  kubelet          Node ha-315064-m03 status is now: NodeReady
	  Normal   RegisteredNode           23s                node-controller  Node ha-315064-m03 event: Registered Node ha-315064-m03 in Controller
	
	
	Name:               ha-315064-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_52_03_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:52:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:02:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:02:38 +0000   Mon, 18 Mar 2024 21:02:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:02:38 +0000   Mon, 18 Mar 2024 21:02:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:02:38 +0000   Mon, 18 Mar 2024 21:02:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:02:38 +0000   Mon, 18 Mar 2024 21:02:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-315064-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e505b03139344fc9b8ceffed32c9bea6
	  System UUID:                e505b031-3934-4fc9-b8ce-ffed32c9bea6
	  Boot ID:                    ed5e098f-3395-44b1-a126-a6378a97cc9b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwjjr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-dhhjx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-315064-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-315064-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-315064-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-315064-m04 status is now: NodeReady
	  Normal   RegisteredNode           119s               node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   NodeNotReady             79s                node-controller  Node ha-315064-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           23s                node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 9s)    kubelet          Node ha-315064-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 9s)    kubelet          Node ha-315064-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 9s)    kubelet          Node ha-315064-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 9s)    kubelet          Node ha-315064-m04 has been rebooted, boot id: ed5e098f-3395-44b1-a126-a6378a97cc9b
	  Normal   NodeReady                8s (x2 over 9s)    kubelet          Node ha-315064-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +8.134199] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.060622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061926] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.170895] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.158641] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.304087] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +5.155955] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.063498] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.791144] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.535740] kauditd_printk_skb: 57 callbacks suppressed
	[Mar18 20:47] kauditd_printk_skb: 35 callbacks suppressed
	[  +2.157125] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[ +10.330891] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.069969] kauditd_printk_skb: 36 callbacks suppressed
	[Mar18 20:49] kauditd_printk_skb: 28 callbacks suppressed
	[Mar18 20:59] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	[  +0.172357] systemd-fstab-generator[3830]: Ignoring "noauto" option for root device
	[  +0.274681] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +0.229244] systemd-fstab-generator[3965]: Ignoring "noauto" option for root device
	[  +0.330679] systemd-fstab-generator[3994]: Ignoring "noauto" option for root device
	[ +10.329786] systemd-fstab-generator[4120]: Ignoring "noauto" option for root device
	[  +0.087000] kauditd_printk_skb: 110 callbacks suppressed
	[  +6.921360] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 21:00] kauditd_printk_skb: 95 callbacks suppressed
	[ +29.533604] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d] <==
	WARNING: 2024/03/18 20:58:00 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T20:58:00.230328Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T20:57:52.19856Z","time spent":"8.031753169s","remote":"127.0.0.1:47508","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 "}
	WARNING: 2024/03/18 20:58:00 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T20:58:00.245455Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"882bbdde445c6a1a","rtt":"4.075318ms","error":"dial tcp 192.168.39.231:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-18T20:58:00.245631Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"882bbdde445c6a1a","rtt":"12.060143ms","error":"dial tcp 192.168.39.231:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-18T20:58:00.33522Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.79:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T20:58:00.335285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.79:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T20:58:00.335427Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a91a1bbc2c758cdc","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-18T20:58:00.335562Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335693Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335747Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335853Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335945Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335982Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.336097Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336164Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336203Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.33626Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336342Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336397Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336428Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.33954Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"info","ts":"2024-03-18T20:58:00.339698Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"info","ts":"2024-03-18T20:58:00.339742Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-315064","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.79:2380"],"advertise-client-urls":["https://192.168.39.79:2379"]}
	
	
	==> etcd [827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3] <==
	{"level":"warn","ts":"2024-03-18T21:01:51.76411Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9d877c1a39931b2","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:01:51.764219Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9d877c1a39931b2","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:01:53.332769Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.84:2380/version","remote-member-id":"e9d877c1a39931b2","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:01:53.33289Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"e9d877c1a39931b2","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:01:56.765199Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9d877c1a39931b2","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:01:56.765257Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9d877c1a39931b2","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:01:57.334756Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.84:2380/version","remote-member-id":"e9d877c1a39931b2","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:01:57.334862Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"e9d877c1a39931b2","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:02:00.051405Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.906719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-315064-m03\" ","response":"range_response_count:1 size:5452"}
	{"level":"info","ts":"2024-03-18T21:02:00.051614Z","caller":"traceutil/trace.go:171","msg":"trace[1237778584] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-315064-m03; range_end:; response_count:1; response_revision:2495; }","duration":"126.166139ms","start":"2024-03-18T21:01:59.925423Z","end":"2024-03-18T21:02:00.051589Z","steps":["trace[1237778584] 'range keys from in-memory index tree'  (duration: 124.766699ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:02:01.336693Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.84:2380/version","remote-member-id":"e9d877c1a39931b2","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:02:01.336764Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"e9d877c1a39931b2","error":"Get \"https://192.168.39.84:2380/version\": dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:02:01.765827Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9d877c1a39931b2","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:02:01.765949Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9d877c1a39931b2","rtt":"0s","error":"dial tcp 192.168.39.84:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T21:02:01.901344Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e9d877c1a39931b2","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"11.328613ms"}
	{"level":"warn","ts":"2024-03-18T21:02:01.901503Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"882bbdde445c6a1a","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"11.491489ms"}
	{"level":"info","ts":"2024-03-18T21:02:02.888347Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:02.888443Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:02.888566Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:02.905613Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a91a1bbc2c758cdc","to":"e9d877c1a39931b2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-18T21:02:02.905732Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:02.90587Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a91a1bbc2c758cdc","to":"e9d877c1a39931b2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-18T21:02:02.906109Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"warn","ts":"2024-03-18T21:02:06.393864Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e9d877c1a39931b2","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"3.606015ms"}
	{"level":"warn","ts":"2024-03-18T21:02:06.393957Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"882bbdde445c6a1a","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"3.704359ms"}
	
	
	==> kernel <==
	 21:02:46 up 16 min,  0 users,  load average: 0.60, 0.68, 0.42
	Linux ha-315064 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9] <==
	I0318 21:02:11.918819       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 21:02:21.939454       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 21:02:21.939504       1 main.go:227] handling current node
	I0318 21:02:21.939529       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 21:02:21.939534       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 21:02:21.939674       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 21:02:21.939679       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 21:02:21.939741       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 21:02:21.939772       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 21:02:31.957928       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 21:02:31.958085       1 main.go:227] handling current node
	I0318 21:02:31.958106       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 21:02:31.958116       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 21:02:31.958378       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 21:02:31.958424       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 21:02:31.958524       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 21:02:31.958567       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 21:02:41.965190       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 21:02:41.965433       1 main.go:227] handling current node
	I0318 21:02:41.965513       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 21:02:41.965540       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 21:02:41.965690       1 main.go:223] Handling node with IPs: map[192.168.39.84:{}]
	I0318 21:02:41.965712       1 main.go:250] Node ha-315064-m03 has CIDR [10.244.2.0/24] 
	I0318 21:02:41.965777       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 21:02:41.965796       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3] <==
	I0318 20:59:51.606964       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 21:00:01.993315       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0318 21:00:11.994412       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0318 21:00:13.280594       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 21:00:15.282152       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0318 21:00:19.425385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
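The goroutine trace above is the end state of a bounded retry loop: kindnetd keeps re-requesting the node list while the service VIP (10.96.0.1:443) is unreachable, and it panics once its retry budget runs out, after which the kubelet restarts the container (the kindnet-cni CrashLoopBackOff entries in the kubelet log further down). A minimal Go sketch of that retry-then-panic pattern, assuming a fixed budget; this is not the actual kindnetd source, and maxRetries, retryDelay and listNodes are hypothetical names:

package main

import (
	"fmt"
	"time"
)

// listNodes stands in for the API call that kept failing in the log
// ("Get https://10.96.0.1:443/api/v1/nodes: ... no route to host").
func listNodes() error {
	return fmt.Errorf("Get \"https://10.96.0.1:443/api/v1/nodes\": dial tcp 10.96.0.1:443: connect: no route to host")
}

func main() {
	const maxRetries = 5               // hypothetical budget
	const retryDelay = 2 * time.Second // hypothetical back-off

	var err error
	for i := 0; i < maxRetries; i++ {
		if err = listNodes(); err == nil {
			break
		}
		fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
		time.Sleep(retryDelay)
	}
	if err != nil {
		// Panicking once the budget is spent is what produces the
		// "goroutine 1 [running]: main.main()" trace captured above.
		panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", err))
	}
}

Crashing here is a reasonable design choice for a DaemonSet container: it hands the retry schedule to the kubelet's restart back-off instead of looping forever inside the process.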
	
	
	==> kube-apiserver [0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493] <==
	I0318 21:00:34.952567       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 21:00:34.952602       1 naming_controller.go:291] Starting NamingConditionController
	I0318 21:00:34.952624       1 establishing_controller.go:76] Starting EstablishingController
	I0318 21:00:34.952677       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 21:00:34.952693       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 21:00:34.952736       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 21:00:35.007965       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 21:00:35.008355       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 21:00:35.009225       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 21:00:35.012823       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 21:00:35.012974       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 21:00:35.016089       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 21:00:35.018006       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 21:00:35.018108       1 aggregator.go:166] initial CRD sync complete...
	I0318 21:00:35.018128       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 21:00:35.018134       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 21:00:35.018139       1 cache.go:39] Caches are synced for autoregister controller
	I0318 21:00:35.018452       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	W0318 21:00:35.035179       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.84]
	I0318 21:00:35.039243       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 21:00:35.040635       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 21:00:35.051458       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0318 21:00:35.057864       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0318 21:00:35.914572       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 21:00:36.676732       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.79 192.168.39.84]
	
	
	==> kube-apiserver [cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b] <==
	I0318 20:59:51.577219       1 options.go:220] external host was not specified, using 192.168.39.79
	I0318 20:59:51.585003       1 server.go:148] Version: v1.28.4
	I0318 20:59:51.587467       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 20:59:52.312566       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 20:59:52.328645       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 20:59:52.328662       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 20:59:52.328968       1 instance.go:298] Using reconciler: lease
	W0318 21:00:12.308944       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0318 21:00:12.311289       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0318 21:00:12.332630       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0318 21:00:12.332672       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c] <==
	I0318 21:01:10.840997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="279.03µs"
	I0318 21:01:10.843273       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 21:01:10.844379       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 21:01:10.846897       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 21:01:10.849311       1 shared_informer.go:318] Caches are synced for PV protection
	I0318 21:01:10.851435       1 shared_informer.go:318] Caches are synced for namespace
	I0318 21:01:10.871767       1 shared_informer.go:318] Caches are synced for daemon sets
	I0318 21:01:10.873830       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 21:01:10.880215       1 shared_informer.go:318] Caches are synced for deployment
	I0318 21:01:10.895232       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0318 21:01:10.926231       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 21:01:10.929811       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 21:01:10.951672       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 21:01:10.966128       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 21:01:10.977110       1 shared_informer.go:318] Caches are synced for persistent volume
	I0318 21:01:11.392886       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 21:01:11.444293       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 21:01:11.444372       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 21:01:27.873368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="89.701795ms"
	I0318 21:01:27.873667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="149.417µs"
	I0318 21:01:47.855761       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-5hmqj" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-5hmqj"
	I0318 21:01:48.010146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="185.685µs"
	I0318 21:02:10.417372       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.138188ms"
	I0318 21:02:10.417500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="46.136µs"
	I0318 21:02:38.036974       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-315064-m04"
	
	
	==> kube-controller-manager [41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef] <==
	I0318 21:00:24.623138       1 serving.go:348] Generated self-signed cert in-memory
	I0318 21:00:24.795515       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 21:00:24.795568       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:00:24.796875       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 21:00:24.797101       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 21:00:24.797439       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 21:00:24.797929       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0318 21:00:34.976143       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7] <==
	E0318 21:00:14.753301       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-315064": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 21:00:33.184506       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-315064": dial tcp 192.168.39.254:8443: connect: no route to host
	I0318 21:00:33.184591       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0318 21:00:33.252251       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 21:00:33.252321       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:00:33.255296       1 server_others.go:152] "Using iptables Proxier"
	I0318 21:00:33.255439       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:00:33.255804       1 server.go:846] "Version info" version="v1.28.4"
	I0318 21:00:33.256161       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:00:33.257994       1 config.go:188] "Starting service config controller"
	I0318 21:00:33.258226       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:00:33.258333       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:00:33.258360       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:00:33.259220       1 config.go:315] "Starting node config controller"
	I0318 21:00:33.259614       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0318 21:00:36.261450       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0318 21:00:36.261746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 21:00:36.265958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 21:00:36.262144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 21:00:36.266092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 21:00:36.262264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 21:00:36.266166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0318 21:00:37.462881       1 shared_informer.go:318] Caches are synced for node config
	I0318 21:00:37.860152       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:00:37.959510       1 shared_informer.go:318] Caches are synced for endpoint slice config
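The mangled query strings above (e.g. "%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless") are not corruption of the capture. The label selector is URL-encoded ("%21" for "!", "%2F" for "/", "%2C" for ",", "%3D" for "="), and the string appears to have been passed through a printf-style formatter with no arguments somewhere along the logging path, so each "%XY" escape is parsed as a width plus a verb with a missing operand. A minimal Go illustration of the effect; the URL below is a shortened stand-in, not the exact request from the log:

package main

import "fmt"

func main() {
	// URL-encoded list request similar to the ones in the kube-proxy log:
	// %21 = "!", %2F = "/", %3D = "=".
	u := "/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless" +
		"&fieldSelector=metadata.name%3Dha-315064"

	// (Mis)using the URL as the format string with no arguments makes fmt
	// read "%21s" as width 21 + verb 's', each with a missing operand.
	mangled := fmt.Sprintf(u)
	fmt.Println(mangled)
	// Prints:
	// /api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless&fieldSelector=metadata.name%!D(MISSING)ha-315064
}

The same artifact shows up in the second kube-proxy block and in the kube-scheduler logs below; the underlying requests are well-formed, only their logged representation is mangled.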
	
	
	==> kube-proxy [df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a] <==
	E0318 20:56:33.760738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:33.760552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:33.760900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:42.144651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:42.144728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:42.144650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:42.144752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:42.144864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:42.144914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:55.073524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:55.073849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:55.074244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:55.074372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:55.074521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:55.074576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:10.433543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:10.433670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:13.505737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:13.505807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:13.505742       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:13.505844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:44.229455       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:44.229674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:59.586237       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:59.586975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62] <==
	E0318 20:57:53.434839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 20:57:53.813687       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 20:57:53.813741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 20:57:53.824675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 20:57:53.824749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 20:57:53.962557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 20:57:53.962660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 20:57:54.386941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 20:57:54.387086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 20:57:54.488944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 20:57:54.489089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 20:57:54.653222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 20:57:54.653374       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 20:57:54.705367       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 20:57:54.705458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 20:57:54.786211       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 20:57:54.786238       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 20:57:54.906322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 20:57:54.906413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 20:57:59.960893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 20:57:59.960988       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 20:58:00.185004       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 20:58:00.185228       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 20:58:00.185417       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0318 20:58:00.185589       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701] <==
	W0318 21:00:29.121632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.79:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:29.121699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.79:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:29.195713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.79:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:29.195792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.79:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:30.849327       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:30.849425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:31.232870       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.79:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:31.232941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.79:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:31.566400       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.79:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:31.566470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.79:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:31.850133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:31.850204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:32.508767       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.79:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:32.508848       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.79:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:34.965755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 21:00:34.969243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 21:00:34.968618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 21:00:34.969364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 21:00:34.968652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 21:00:34.969454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 21:00:34.968732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 21:00:34.969503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 21:00:34.968815       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 21:00:34.969549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 21:00:47.854685       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 21:00:36 ha-315064 kubelet[1363]: E0318 21:00:36.257536    1363 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-315064\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-315064?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 18 21:00:36 ha-315064 kubelet[1363]: E0318 21:00:36.258674    1363 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Mar 18 21:00:36 ha-315064 kubelet[1363]: E0318 21:00:36.256947    1363 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-315064?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Mar 18 21:00:36 ha-315064 kubelet[1363]: W0318 21:00:36.257853    1363 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=1926": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 18 21:00:36 ha-315064 kubelet[1363]: E0318 21:00:36.258940    1363 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=1926": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 18 21:00:38 ha-315064 kubelet[1363]: I0318 21:00:38.713108    1363 scope.go:117] "RemoveContainer" containerID="d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3"
	Mar 18 21:00:38 ha-315064 kubelet[1363]: E0318 21:00:38.713784    1363 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-tbghx_kube-system(9c5ae7df-5e40-42ca-b8e6-d7bbc335e065)\"" pod="kube-system/kindnet-tbghx" podUID="9c5ae7df-5e40-42ca-b8e6-d7bbc335e065"
	Mar 18 21:00:41 ha-315064 kubelet[1363]: I0318 21:00:41.019657    1363 scope.go:117] "RemoveContainer" containerID="41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef"
	Mar 18 21:00:41 ha-315064 kubelet[1363]: E0318 21:00:41.020420    1363 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-315064_kube-system(011b56247b514cfea4dc3b2076428e51)\"" pod="kube-system/kube-controller-manager-ha-315064" podUID="011b56247b514cfea4dc3b2076428e51"
	Mar 18 21:00:43 ha-315064 kubelet[1363]: I0318 21:00:43.712228    1363 scope.go:117] "RemoveContainer" containerID="7dcca6592d242f7b77728fa67d8577fbdbf9d494ef724161d1da281ec0324099"
	Mar 18 21:00:43 ha-315064 kubelet[1363]: I0318 21:00:43.740543    1363 scope.go:117] "RemoveContainer" containerID="41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef"
	Mar 18 21:00:43 ha-315064 kubelet[1363]: E0318 21:00:43.741116    1363 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-315064_kube-system(011b56247b514cfea4dc3b2076428e51)\"" pod="kube-system/kube-controller-manager-ha-315064" podUID="011b56247b514cfea4dc3b2076428e51"
	Mar 18 21:00:50 ha-315064 kubelet[1363]: I0318 21:00:50.712487    1363 scope.go:117] "RemoveContainer" containerID="d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3"
	Mar 18 21:00:58 ha-315064 kubelet[1363]: I0318 21:00:58.578885    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-c7lzc" podStartSLOduration=573.811054735 podCreationTimestamp="2024-03-18 20:51:22 +0000 UTC" firstStartedPulling="2024-03-18 20:51:23.479513656 +0000 UTC m=+255.963018731" lastFinishedPulling="2024-03-18 20:51:26.247271635 +0000 UTC m=+258.730776709" observedRunningTime="2024-03-18 20:51:27.012268198 +0000 UTC m=+259.495773292" watchObservedRunningTime="2024-03-18 21:00:58.578812713 +0000 UTC m=+831.062317787"
	Mar 18 21:00:58 ha-315064 kubelet[1363]: I0318 21:00:58.712726    1363 scope.go:117] "RemoveContainer" containerID="41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef"
	Mar 18 21:01:07 ha-315064 kubelet[1363]: E0318 21:01:07.734463    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:01:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:01:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:01:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:01:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 21:02:07 ha-315064 kubelet[1363]: E0318 21:02:07.736095    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:02:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:02:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:02:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:02:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:02:45.163606   28735 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18421-5321/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-315064 -n ha-315064
helpers_test.go:261: (dbg) Run:  kubectl --context ha-315064 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (410.74s)
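Note on the stderr block above: the "bufio.Scanner: token too long" failure while reading lastStart.txt is the standard Go bufio.Scanner line-length cap (64 KiB per token by default) being exceeded by a single very long log line. A minimal illustrative sketch of reading such a file with an enlarged buffer follows; this is not minikube's actual code, and the file name and size cap are assumptions for the example.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path standing in for .../logs/lastStart.txt from the report.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.Scanner rejects tokens larger than its buffer cap (64 KiB by
	// default) with "token too long"; raising the cap avoids that.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)
	for sc.Scan() {
		_ = sc.Text() // process each (possibly very long) line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}

With a larger cap the scan completes instead of aborting on the oversized line, which is consistent with the rest of the post-mortem above still being collected despite the error.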

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 stop -v=7 --alsologtostderr: exit status 82 (2m0.478209899s)

                                                
                                                
-- stdout --
	* Stopping node "ha-315064-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:03:05.394512   29131 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:03:05.394604   29131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:03:05.394613   29131 out.go:304] Setting ErrFile to fd 2...
	I0318 21:03:05.394617   29131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:03:05.394804   29131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:03:05.395013   29131 out.go:298] Setting JSON to false
	I0318 21:03:05.395081   29131 mustload.go:65] Loading cluster: ha-315064
	I0318 21:03:05.395422   29131 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:03:05.395503   29131 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 21:03:05.395672   29131 mustload.go:65] Loading cluster: ha-315064
	I0318 21:03:05.395792   29131 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:03:05.395821   29131 stop.go:39] StopHost: ha-315064-m04
	I0318 21:03:05.396172   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:03:05.396218   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:03:05.411408   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I0318 21:03:05.411818   29131 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:03:05.412414   29131 main.go:141] libmachine: Using API Version  1
	I0318 21:03:05.412441   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:03:05.412825   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:03:05.415189   29131 out.go:177] * Stopping node "ha-315064-m04"  ...
	I0318 21:03:05.416871   29131 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 21:03:05.416917   29131 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 21:03:05.417169   29131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 21:03:05.417202   29131 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 21:03:05.419981   29131 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 21:03:05.420444   29131 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 22:02:32 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 21:03:05.420463   29131 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 21:03:05.420652   29131 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 21:03:05.420789   29131 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 21:03:05.420920   29131 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 21:03:05.421058   29131 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	I0318 21:03:05.510802   29131 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 21:03:05.565151   29131 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 21:03:05.619496   29131 main.go:141] libmachine: Stopping "ha-315064-m04"...
	I0318 21:03:05.619549   29131 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 21:03:05.621200   29131 main.go:141] libmachine: (ha-315064-m04) Calling .Stop
	I0318 21:03:05.624831   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 0/120
	I0318 21:03:06.626096   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 1/120
	I0318 21:03:07.627444   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 2/120
	I0318 21:03:08.628541   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 3/120
	I0318 21:03:09.630195   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 4/120
	I0318 21:03:10.632151   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 5/120
	I0318 21:03:11.633938   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 6/120
	I0318 21:03:12.635669   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 7/120
	I0318 21:03:13.637602   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 8/120
	I0318 21:03:14.639297   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 9/120
	I0318 21:03:15.641579   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 10/120
	I0318 21:03:16.643286   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 11/120
	I0318 21:03:17.644568   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 12/120
	I0318 21:03:18.645837   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 13/120
	I0318 21:03:19.647615   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 14/120
	I0318 21:03:20.649523   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 15/120
	I0318 21:03:21.651293   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 16/120
	I0318 21:03:22.652823   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 17/120
	I0318 21:03:23.654501   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 18/120
	I0318 21:03:24.655826   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 19/120
	I0318 21:03:25.658264   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 20/120
	I0318 21:03:26.660447   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 21/120
	I0318 21:03:27.661915   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 22/120
	I0318 21:03:28.663405   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 23/120
	I0318 21:03:29.664720   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 24/120
	I0318 21:03:30.666739   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 25/120
	I0318 21:03:31.668011   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 26/120
	I0318 21:03:32.669568   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 27/120
	I0318 21:03:33.671285   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 28/120
	I0318 21:03:34.673026   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 29/120
	I0318 21:03:35.674938   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 30/120
	I0318 21:03:36.676185   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 31/120
	I0318 21:03:37.677431   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 32/120
	I0318 21:03:38.679238   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 33/120
	I0318 21:03:39.681136   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 34/120
	I0318 21:03:40.682638   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 35/120
	I0318 21:03:41.683978   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 36/120
	I0318 21:03:42.685494   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 37/120
	I0318 21:03:43.687213   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 38/120
	I0318 21:03:44.689366   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 39/120
	I0318 21:03:45.691439   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 40/120
	I0318 21:03:46.692789   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 41/120
	I0318 21:03:47.694938   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 42/120
	I0318 21:03:48.696211   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 43/120
	I0318 21:03:49.697425   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 44/120
	I0318 21:03:50.699158   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 45/120
	I0318 21:03:51.700645   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 46/120
	I0318 21:03:52.702127   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 47/120
	I0318 21:03:53.703412   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 48/120
	I0318 21:03:54.704686   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 49/120
	I0318 21:03:55.706592   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 50/120
	I0318 21:03:56.707878   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 51/120
	I0318 21:03:57.709504   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 52/120
	I0318 21:03:58.710706   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 53/120
	I0318 21:03:59.712289   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 54/120
	I0318 21:04:00.713935   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 55/120
	I0318 21:04:01.715098   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 56/120
	I0318 21:04:02.716292   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 57/120
	I0318 21:04:03.717829   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 58/120
	I0318 21:04:04.719179   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 59/120
	I0318 21:04:05.721146   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 60/120
	I0318 21:04:06.723310   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 61/120
	I0318 21:04:07.725828   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 62/120
	I0318 21:04:08.727261   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 63/120
	I0318 21:04:09.728782   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 64/120
	I0318 21:04:10.731160   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 65/120
	I0318 21:04:11.732436   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 66/120
	I0318 21:04:12.733694   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 67/120
	I0318 21:04:13.734889   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 68/120
	I0318 21:04:14.736564   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 69/120
	I0318 21:04:15.738274   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 70/120
	I0318 21:04:16.739998   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 71/120
	I0318 21:04:17.741275   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 72/120
	I0318 21:04:18.742535   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 73/120
	I0318 21:04:19.744533   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 74/120
	I0318 21:04:20.745661   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 75/120
	I0318 21:04:21.747297   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 76/120
	I0318 21:04:22.748615   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 77/120
	I0318 21:04:23.750215   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 78/120
	I0318 21:04:24.751342   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 79/120
	I0318 21:04:25.753025   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 80/120
	I0318 21:04:26.754228   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 81/120
	I0318 21:04:27.755451   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 82/120
	I0318 21:04:28.756828   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 83/120
	I0318 21:04:29.758322   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 84/120
	I0318 21:04:30.760182   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 85/120
	I0318 21:04:31.761595   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 86/120
	I0318 21:04:32.763260   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 87/120
	I0318 21:04:33.764639   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 88/120
	I0318 21:04:34.765906   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 89/120
	I0318 21:04:35.767142   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 90/120
	I0318 21:04:36.768498   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 91/120
	I0318 21:04:37.769731   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 92/120
	I0318 21:04:38.771290   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 93/120
	I0318 21:04:39.772543   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 94/120
	I0318 21:04:40.774205   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 95/120
	I0318 21:04:41.775539   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 96/120
	I0318 21:04:42.776963   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 97/120
	I0318 21:04:43.778398   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 98/120
	I0318 21:04:44.780945   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 99/120
	I0318 21:04:45.783017   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 100/120
	I0318 21:04:46.784356   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 101/120
	I0318 21:04:47.785623   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 102/120
	I0318 21:04:48.787471   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 103/120
	I0318 21:04:49.788759   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 104/120
	I0318 21:04:50.790521   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 105/120
	I0318 21:04:51.791937   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 106/120
	I0318 21:04:52.793207   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 107/120
	I0318 21:04:53.794420   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 108/120
	I0318 21:04:54.795733   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 109/120
	I0318 21:04:55.797145   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 110/120
	I0318 21:04:56.798524   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 111/120
	I0318 21:04:57.799993   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 112/120
	I0318 21:04:58.801175   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 113/120
	I0318 21:04:59.803282   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 114/120
	I0318 21:05:00.805110   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 115/120
	I0318 21:05:01.807380   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 116/120
	I0318 21:05:02.808711   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 117/120
	I0318 21:05:03.809900   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 118/120
	I0318 21:05:04.811163   29131 main.go:141] libmachine: (ha-315064-m04) Waiting for machine to stop 119/120
	I0318 21:05:05.811658   29131 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 21:05:05.811703   29131 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 21:05:05.813598   29131 out.go:177] 
	W0318 21:05:05.814850   29131 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 21:05:05.814873   29131 out.go:239] * 
	* 
	W0318 21:05:05.817204   29131 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 21:05:05.818711   29131 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-315064 stop -v=7 --alsologtostderr": exit status 82
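The 120 "Waiting for machine to stop i/120" lines above show a once-per-second poll of the VM state; when the machine is still "Running" after the last attempt, the stop is abandoned and the CLI exits with GUEST_STOP_TIMEOUT (status 82). A minimal sketch of that polling pattern follows; it is illustrative only, with a hypothetical vm interface and a fake driver that never stops, not minikube's libmachine code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a hypothetical stand-in for a libmachine driver.
type vm interface {
	Stop() error
	GetState() (string, error)
}

// stuckVM mimics ha-315064-m04 in the log: Stop is accepted but the
// machine never leaves the "Running" state.
type stuckVM struct{}

func (stuckVM) Stop() error               { return nil }
func (stuckVM) GetState() (string, error) { return "Running", nil }

func stopWithPolling(m vm, retries int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < retries; i++ {
		if state, err := m.GetState(); err == nil && state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, retries)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopWithPolling(stuckVM{}, 120); err != nil {
		// minikube surfaces this condition as GUEST_STOP_TIMEOUT / exit status 82.
		fmt.Println("stop err:", err)
	}
}

Run against the fake driver, this prints the same 0/120 through 119/120 countdown and then the "unable to stop vm" error seen in the log.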
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
E0318 21:05:14.158233   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:05:23.236367   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr: exit status 3 (18.953480343s)

                                                
                                                
-- stdout --
	ha-315064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315064-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:05:05.876121   29459 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:05:05.876268   29459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:05:05.876278   29459 out.go:304] Setting ErrFile to fd 2...
	I0318 21:05:05.876283   29459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:05:05.876450   29459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:05:05.876621   29459 out.go:298] Setting JSON to false
	I0318 21:05:05.876648   29459 mustload.go:65] Loading cluster: ha-315064
	I0318 21:05:05.876692   29459 notify.go:220] Checking for updates...
	I0318 21:05:05.877085   29459 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:05:05.877119   29459 status.go:255] checking status of ha-315064 ...
	I0318 21:05:05.877516   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:05.877578   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:05.893034   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
	I0318 21:05:05.893437   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:05.893930   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:05.893955   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:05.894311   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:05.894536   29459 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 21:05:05.895843   29459 status.go:330] ha-315064 host status = "Running" (err=<nil>)
	I0318 21:05:05.895857   29459 host.go:66] Checking if "ha-315064" exists ...
	I0318 21:05:05.896124   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:05.896155   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:05.910108   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0318 21:05:05.910489   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:05.910895   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:05.910918   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:05.911248   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:05.911419   29459 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 21:05:05.914223   29459 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 21:05:05.914615   29459 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 21:05:05.914641   29459 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 21:05:05.914784   29459 host.go:66] Checking if "ha-315064" exists ...
	I0318 21:05:05.915046   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:05.915084   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:05.928964   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38327
	I0318 21:05:05.929337   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:05.929748   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:05.929770   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:05.930144   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:05.930337   29459 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 21:05:05.930515   29459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 21:05:05.930547   29459 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 21:05:05.932933   29459 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 21:05:05.933347   29459 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 21:05:05.933378   29459 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 21:05:05.933458   29459 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 21:05:05.933608   29459 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 21:05:05.933764   29459 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 21:05:05.933918   29459 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 21:05:06.020041   29459 ssh_runner.go:195] Run: systemctl --version
	I0318 21:05:06.032442   29459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 21:05:06.054041   29459 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 21:05:06.054066   29459 api_server.go:166] Checking apiserver status ...
	I0318 21:05:06.054101   29459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:05:06.070332   29459 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5308/cgroup
	W0318 21:05:06.083024   29459 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5308/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:05:06.083062   29459 ssh_runner.go:195] Run: ls
	I0318 21:05:06.088142   29459 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 21:05:06.093012   29459 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 21:05:06.093037   29459 status.go:422] ha-315064 apiserver status = Running (err=<nil>)
	I0318 21:05:06.093049   29459 status.go:257] ha-315064 status: &{Name:ha-315064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 21:05:06.093085   29459 status.go:255] checking status of ha-315064-m02 ...
	I0318 21:05:06.093445   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:06.093484   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:06.107591   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41371
	I0318 21:05:06.107971   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:06.108398   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:06.108417   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:06.108682   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:06.108887   29459 main.go:141] libmachine: (ha-315064-m02) Calling .GetState
	I0318 21:05:06.110318   29459 status.go:330] ha-315064-m02 host status = "Running" (err=<nil>)
	I0318 21:05:06.110337   29459 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 21:05:06.110714   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:06.110756   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:06.125619   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0318 21:05:06.125979   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:06.126439   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:06.126481   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:06.126786   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:06.126996   29459 main.go:141] libmachine: (ha-315064-m02) Calling .GetIP
	I0318 21:05:06.129982   29459 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 21:05:06.130428   29459 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:59:56 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 21:05:06.130468   29459 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 21:05:06.130569   29459 host.go:66] Checking if "ha-315064-m02" exists ...
	I0318 21:05:06.130857   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:06.130889   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:06.144670   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44773
	I0318 21:05:06.145040   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:06.145474   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:06.145500   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:06.145854   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:06.146039   29459 main.go:141] libmachine: (ha-315064-m02) Calling .DriverName
	I0318 21:05:06.146191   29459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 21:05:06.146214   29459 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHHostname
	I0318 21:05:06.148892   29459 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 21:05:06.149351   29459 main.go:141] libmachine: (ha-315064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:47:db", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:59:56 +0000 UTC Type:0 Mac:52:54:00:83:47:db Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-315064-m02 Clientid:01:52:54:00:83:47:db}
	I0318 21:05:06.149375   29459 main.go:141] libmachine: (ha-315064-m02) DBG | domain ha-315064-m02 has defined IP address 192.168.39.231 and MAC address 52:54:00:83:47:db in network mk-ha-315064
	I0318 21:05:06.149542   29459 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHPort
	I0318 21:05:06.149708   29459 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHKeyPath
	I0318 21:05:06.149867   29459 main.go:141] libmachine: (ha-315064-m02) Calling .GetSSHUsername
	I0318 21:05:06.150000   29459 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m02/id_rsa Username:docker}
	I0318 21:05:06.235024   29459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 21:05:06.252142   29459 kubeconfig.go:125] found "ha-315064" server: "https://192.168.39.254:8443"
	I0318 21:05:06.252174   29459 api_server.go:166] Checking apiserver status ...
	I0318 21:05:06.252214   29459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:05:06.268743   29459 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	W0318 21:05:06.279933   29459 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:05:06.279994   29459 ssh_runner.go:195] Run: ls
	I0318 21:05:06.284979   29459 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 21:05:06.289981   29459 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 21:05:06.290002   29459 status.go:422] ha-315064-m02 apiserver status = Running (err=<nil>)
	I0318 21:05:06.290012   29459 status.go:257] ha-315064-m02 status: &{Name:ha-315064-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 21:05:06.290048   29459 status.go:255] checking status of ha-315064-m04 ...
	I0318 21:05:06.290360   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:06.290400   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:06.305889   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I0318 21:05:06.306372   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:06.306905   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:06.306927   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:06.307219   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:06.307407   29459 main.go:141] libmachine: (ha-315064-m04) Calling .GetState
	I0318 21:05:06.308889   29459 status.go:330] ha-315064-m04 host status = "Running" (err=<nil>)
	I0318 21:05:06.308923   29459 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 21:05:06.309261   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:06.309302   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:06.325708   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0318 21:05:06.326112   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:06.326550   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:06.326573   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:06.326885   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:06.327086   29459 main.go:141] libmachine: (ha-315064-m04) Calling .GetIP
	I0318 21:05:06.329589   29459 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 21:05:06.330012   29459 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 22:02:32 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 21:05:06.330040   29459 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 21:05:06.330172   29459 host.go:66] Checking if "ha-315064-m04" exists ...
	I0318 21:05:06.330512   29459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:05:06.330551   29459 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:05:06.345724   29459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38835
	I0318 21:05:06.346047   29459 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:05:06.346445   29459 main.go:141] libmachine: Using API Version  1
	I0318 21:05:06.346464   29459 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:05:06.346747   29459 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:05:06.346951   29459 main.go:141] libmachine: (ha-315064-m04) Calling .DriverName
	I0318 21:05:06.347122   29459 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 21:05:06.347141   29459 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHHostname
	I0318 21:05:06.349535   29459 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 21:05:06.349934   29459 main.go:141] libmachine: (ha-315064-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:ee:1a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 22:02:32 +0000 UTC Type:0 Mac:52:54:00:ed:ee:1a Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-315064-m04 Clientid:01:52:54:00:ed:ee:1a}
	I0318 21:05:06.349959   29459 main.go:141] libmachine: (ha-315064-m04) DBG | domain ha-315064-m04 has defined IP address 192.168.39.253 and MAC address 52:54:00:ed:ee:1a in network mk-ha-315064
	I0318 21:05:06.350124   29459 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHPort
	I0318 21:05:06.350282   29459 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHKeyPath
	I0318 21:05:06.350440   29459 main.go:141] libmachine: (ha-315064-m04) Calling .GetSSHUsername
	I0318 21:05:06.350594   29459 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064-m04/id_rsa Username:docker}
	W0318 21:05:24.773090   29459 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.253:22: connect: no route to host
	W0318 21:05:24.773186   29459 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	E0318 21:05:24.773206   29459 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host
	I0318 21:05:24.773213   29459 status.go:257] ha-315064-m04 status: &{Name:ha-315064-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0318 21:05:24.773230   29459 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.253:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr" : exit status 3
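For the m04 row in the status output above, the probe could not even open an SSH connection ("dial tcp 192.168.39.253:22: connect: no route to host"), so the node is reported as host: Error with kubelet: Nonexistent and the command exits non-zero. A minimal sketch of that classification step follows; it is illustrative only, the address and state names are taken from the log, and the helper function is hypothetical rather than minikube's API.

package main

import (
	"fmt"
	"net"
	"time"
)

// hostState classifies a node by whether its SSH port is reachable: a dial
// failure maps to "Error" (and the kubelet state stays unknown, shown as
// "Nonexistent" in the report); otherwise the host is treated as "Running".
func hostState(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// e.g. "dial tcp 192.168.39.253:22: connect: no route to host"
		return "Error"
	}
	conn.Close()
	return "Running"
}

func main() {
	// Address taken from the log above (ha-315064-m04's SSH endpoint).
	fmt.Println("ha-315064-m04 host:", hostState("192.168.39.253:22"))
}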
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-315064 -n ha-315064
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-315064 logs -n 25: (1.869777076s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-315064 ssh -n ha-315064-m02 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04:/home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m04 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp testdata/cp-test.txt                                               | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064:/home/docker/cp-test_ha-315064-m04_ha-315064.txt                      |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064 sudo cat                                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064.txt                                |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m02:/home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m02 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m03:/home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n                                                                | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | ha-315064-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-315064 ssh -n ha-315064-m03 sudo cat                                         | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC | 18 Mar 24 20:52 UTC |
	|         | /home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-315064 node stop m02 -v=7                                                    | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-315064 node start m02 -v=7                                                   | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:54 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-315064 -v=7                                                          | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:55 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-315064 -v=7                                                               | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:55 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-315064 --wait=true -v=7                                                   | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 20:57 UTC | 18 Mar 24 21:02 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-315064                                                               | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 21:02 UTC |                     |
	| node    | ha-315064 node delete m03 -v=7                                                  | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 21:02 UTC | 18 Mar 24 21:03 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-315064 stop -v=7                                                             | ha-315064 | jenkins | v1.32.0 | 18 Mar 24 21:03 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 20:57:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 20:57:58.990415   27593 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:57:58.990580   27593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:57:58.990594   27593 out.go:304] Setting ErrFile to fd 2...
	I0318 20:57:58.990599   27593 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:57:58.990816   27593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:57:58.991358   27593 out.go:298] Setting JSON to false
	I0318 20:57:58.992243   27593 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2423,"bootTime":1710793056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:57:58.992305   27593 start.go:139] virtualization: kvm guest
	I0318 20:57:58.994922   27593 out.go:177] * [ha-315064] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:57:58.996877   27593 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 20:57:58.998425   27593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:57:58.996890   27593 notify.go:220] Checking for updates...
	I0318 20:57:59.001368   27593 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:57:59.002948   27593 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:57:59.004493   27593 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 20:57:59.005940   27593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 20:57:59.007713   27593 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:57:59.007845   27593 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:57:59.008215   27593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:57:59.008277   27593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:57:59.029144   27593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I0318 20:57:59.029616   27593 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:57:59.030232   27593 main.go:141] libmachine: Using API Version  1
	I0318 20:57:59.030258   27593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:57:59.030597   27593 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:57:59.030797   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:57:59.065815   27593 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 20:57:59.067338   27593 start.go:297] selected driver: kvm2
	I0318 20:57:59.067351   27593 start.go:901] validating driver "kvm2" against &{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.253 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:57:59.067496   27593 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 20:57:59.067925   27593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:57:59.068004   27593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 20:57:59.081919   27593 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 20:57:59.082630   27593 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 20:57:59.082711   27593 cni.go:84] Creating CNI manager for ""
	I0318 20:57:59.082725   27593 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 20:57:59.082786   27593 start.go:340] cluster config:
	{Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.253 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:57:59.082951   27593 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:57:59.085701   27593 out.go:177] * Starting "ha-315064" primary control-plane node in "ha-315064" cluster
	I0318 20:57:59.087140   27593 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:57:59.087175   27593 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 20:57:59.087184   27593 cache.go:56] Caching tarball of preloaded images
	I0318 20:57:59.087253   27593 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 20:57:59.087263   27593 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:57:59.087379   27593 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/config.json ...
	I0318 20:57:59.087556   27593 start.go:360] acquireMachinesLock for ha-315064: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 20:57:59.087603   27593 start.go:364] duration metric: took 26.79µs to acquireMachinesLock for "ha-315064"
	I0318 20:57:59.087623   27593 start.go:96] Skipping create...Using existing machine configuration
	I0318 20:57:59.087632   27593 fix.go:54] fixHost starting: 
	I0318 20:57:59.087984   27593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:57:59.088028   27593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:57:59.101479   27593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0318 20:57:59.101899   27593 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:57:59.103507   27593 main.go:141] libmachine: Using API Version  1
	I0318 20:57:59.103531   27593 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:57:59.103856   27593 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:57:59.104099   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:57:59.104251   27593 main.go:141] libmachine: (ha-315064) Calling .GetState
	I0318 20:57:59.105740   27593 fix.go:112] recreateIfNeeded on ha-315064: state=Running err=<nil>
	W0318 20:57:59.105769   27593 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 20:57:59.107865   27593 out.go:177] * Updating the running kvm2 "ha-315064" VM ...
	I0318 20:57:59.109205   27593 machine.go:94] provisionDockerMachine start ...
	I0318 20:57:59.109223   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:57:59.109400   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.111955   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.112392   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.112418   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.112534   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:57:59.112701   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.112848   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.112983   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:57:59.113136   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:57:59.113300   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:57:59.113311   27593 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 20:57:59.226616   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064
	
	I0318 20:57:59.226651   27593 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:57:59.226896   27593 buildroot.go:166] provisioning hostname "ha-315064"
	I0318 20:57:59.226922   27593 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:57:59.227141   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.229699   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.230124   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.230162   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.230303   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:57:59.230503   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.230661   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.230794   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:57:59.230999   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:57:59.231202   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:57:59.231217   27593 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-315064 && echo "ha-315064" | sudo tee /etc/hostname
	I0318 20:57:59.366741   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-315064
	
	I0318 20:57:59.366770   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.369595   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.369969   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.369996   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.370225   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:57:59.370385   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.370540   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.370724   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:57:59.370893   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:57:59.371095   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:57:59.371116   27593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-315064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-315064/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-315064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 20:57:59.482665   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 20:57:59.482693   27593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 20:57:59.482712   27593 buildroot.go:174] setting up certificates
	I0318 20:57:59.482721   27593 provision.go:84] configureAuth start
	I0318 20:57:59.482729   27593 main.go:141] libmachine: (ha-315064) Calling .GetMachineName
	I0318 20:57:59.483002   27593 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:57:59.486002   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.486527   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.486561   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.486753   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.488827   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.489224   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.489252   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.489358   27593 provision.go:143] copyHostCerts
	I0318 20:57:59.489392   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:57:59.489444   27593 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 20:57:59.489456   27593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 20:57:59.489542   27593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 20:57:59.489652   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:57:59.489677   27593 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 20:57:59.489683   27593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 20:57:59.489724   27593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 20:57:59.489801   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:57:59.489836   27593 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 20:57:59.489849   27593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 20:57:59.489892   27593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 20:57:59.489977   27593 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.ha-315064 san=[127.0.0.1 192.168.39.79 ha-315064 localhost minikube]
	I0318 20:57:59.889061   27593 provision.go:177] copyRemoteCerts
	I0318 20:57:59.889115   27593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 20:57:59.889137   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:57:59.891532   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.891941   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:57:59.891968   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:57:59.892121   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:57:59.892304   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:57:59.892508   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:57:59.892664   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:57:59.976447   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 20:57:59.976538   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 20:58:00.006401   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 20:58:00.006481   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 20:58:00.037850   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 20:58:00.037919   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 20:58:00.065813   27593 provision.go:87] duration metric: took 583.079708ms to configureAuth
	I0318 20:58:00.065842   27593 buildroot.go:189] setting minikube options for container-runtime
	I0318 20:58:00.066124   27593 config.go:182] Loaded profile config "ha-315064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:58:00.066212   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:58:00.068756   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:58:00.069195   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:58:00.069222   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:58:00.069352   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:58:00.069540   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:58:00.069729   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:58:00.069908   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:58:00.070095   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:58:00.070247   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:58:00.070262   27593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 20:59:31.052953   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 20:59:31.052982   27593 machine.go:97] duration metric: took 1m31.943763396s to provisionDockerMachine
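Note: the "printf %!s(MISSING)" in the command above is a Go format-verb artifact in minikube's log rendering, not what literally ran on the guest; the echoed output at 20:59:31 shows the content that was actually written. Reconstructed from that output (a sketch, not quoted from minikube's source), the provisioning step amounts to:

    sudo mkdir -p /etc/sysconfig && printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The gap between 20:58:00 and 20:59:31 is that crio restart, which accounts for most of the 1m31.9s provisionDockerMachine duration reported above.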
	I0318 20:59:31.052996   27593 start.go:293] postStartSetup for "ha-315064" (driver="kvm2")
	I0318 20:59:31.053028   27593 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 20:59:31.053049   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.053414   27593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 20:59:31.053450   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.056467   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.056990   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.057017   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.057163   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.057320   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.057457   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.057589   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:59:31.140700   27593 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 20:59:31.145337   27593 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 20:59:31.145360   27593 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 20:59:31.145412   27593 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 20:59:31.145486   27593 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 20:59:31.145497   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 20:59:31.145574   27593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 20:59:31.155749   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:59:31.183080   27593 start.go:296] duration metric: took 130.073154ms for postStartSetup
	I0318 20:59:31.183124   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.183414   27593 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0318 20:59:31.183438   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.186031   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.186422   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.186463   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.186619   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.186818   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.186958   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.187098   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	W0318 20:59:31.267547   27593 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0318 20:59:31.267568   27593 fix.go:56] duration metric: took 1m32.179937374s for fixHost
	I0318 20:59:31.267588   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.270318   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.270790   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.270821   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.270939   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.271126   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.271301   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.271408   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.271564   27593 main.go:141] libmachine: Using SSH client type: native
	I0318 20:59:31.271763   27593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0318 20:59:31.271780   27593 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 20:59:31.374041   27593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710795571.345894013
	
	I0318 20:59:31.374063   27593 fix.go:216] guest clock: 1710795571.345894013
	I0318 20:59:31.374069   27593 fix.go:229] Guest: 2024-03-18 20:59:31.345894013 +0000 UTC Remote: 2024-03-18 20:59:31.267574664 +0000 UTC m=+92.328918413 (delta=78.319349ms)
	I0318 20:59:31.374086   27593 fix.go:200] guest clock delta is within tolerance: 78.319349ms
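Note: the garbled "date +%!s(MISSING).%!N(MISSING)" is the same logging artifact; given the returned value 1710795571.345894013 (Unix seconds.nanoseconds), the probe presumably runs the equivalent of:

    date +%s.%N

minikube then compares that guest timestamp against the host clock, which yields the 78.319349ms delta judged within tolerance above.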
	I0318 20:59:31.374091   27593 start.go:83] releasing machines lock for "ha-315064", held for 1m32.286477281s
	I0318 20:59:31.374107   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.374385   27593 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:59:31.377019   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.377404   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.377424   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.377584   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.378138   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.378314   27593 main.go:141] libmachine: (ha-315064) Calling .DriverName
	I0318 20:59:31.378398   27593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 20:59:31.378447   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.378529   27593 ssh_runner.go:195] Run: cat /version.json
	I0318 20:59:31.378548   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHHostname
	I0318 20:59:31.380830   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.381262   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.381285   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.381299   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.381463   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.381626   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.381779   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:31.381811   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:31.381890   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.381938   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHPort
	I0318 20:59:31.382037   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:59:31.382076   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHKeyPath
	I0318 20:59:31.382209   27593 main.go:141] libmachine: (ha-315064) Calling .GetSSHUsername
	I0318 20:59:31.382357   27593 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/ha-315064/id_rsa Username:docker}
	I0318 20:59:31.459059   27593 ssh_runner.go:195] Run: systemctl --version
	I0318 20:59:31.482910   27593 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 20:59:31.653491   27593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 20:59:31.661490   27593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 20:59:31.661569   27593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 20:59:31.672371   27593 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 20:59:31.672392   27593 start.go:494] detecting cgroup driver to use...
	I0318 20:59:31.672445   27593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 20:59:31.690263   27593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 20:59:31.705517   27593 docker.go:217] disabling cri-docker service (if available) ...
	I0318 20:59:31.705580   27593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 20:59:31.720644   27593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 20:59:31.735117   27593 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 20:59:31.897780   27593 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 20:59:32.107296   27593 docker.go:233] disabling docker service ...
	I0318 20:59:32.107352   27593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 20:59:32.159372   27593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 20:59:32.175739   27593 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 20:59:32.394495   27593 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 20:59:32.572710   27593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 20:59:32.589737   27593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 20:59:32.610355   27593 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 20:59:32.610429   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.623702   27593 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 20:59:32.623761   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.636402   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.649702   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.661655   27593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 20:59:32.675530   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.688349   27593 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.700484   27593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 20:59:32.712473   27593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 20:59:32.723198   27593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 20:59:32.734314   27593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:59:32.901886   27593 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 20:59:42.697121   27593 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.79519496s)
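Note: the sed/grep sequence at 20:59:32 rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O. A sketch of how the relevant keys in that drop-in should look after those edits (reconstructed from the commands above; any other pre-existing keys in the file are left untouched):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]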
	I0318 20:59:42.697151   27593 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 20:59:42.697208   27593 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 20:59:42.702795   27593 start.go:562] Will wait 60s for crictl version
	I0318 20:59:42.702834   27593 ssh_runner.go:195] Run: which crictl
	I0318 20:59:42.707268   27593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 20:59:42.751977   27593 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 20:59:42.752054   27593 ssh_runner.go:195] Run: crio --version
	I0318 20:59:42.782120   27593 ssh_runner.go:195] Run: crio --version
	I0318 20:59:42.813735   27593 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 20:59:42.815251   27593 main.go:141] libmachine: (ha-315064) Calling .GetIP
	I0318 20:59:42.817837   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:42.818216   27593 main.go:141] libmachine: (ha-315064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:a5:8a", ip: ""} in network mk-ha-315064: {Iface:virbr1 ExpiryTime:2024-03-18 21:46:37 +0000 UTC Type:0 Mac:52:54:00:3e:a5:8a Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-315064 Clientid:01:52:54:00:3e:a5:8a}
	I0318 20:59:42.818245   27593 main.go:141] libmachine: (ha-315064) DBG | domain ha-315064 has defined IP address 192.168.39.79 and MAC address 52:54:00:3e:a5:8a in network mk-ha-315064
	I0318 20:59:42.818470   27593 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 20:59:42.823704   27593 kubeadm.go:877] updating cluster {Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.253 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 20:59:42.823847   27593 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:59:42.823895   27593 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:59:42.873949   27593 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 20:59:42.873965   27593 crio.go:433] Images already preloaded, skipping extraction
	I0318 20:59:42.874012   27593 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 20:59:42.911568   27593 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 20:59:42.911585   27593 cache_images.go:84] Images are preloaded, skipping loading
	I0318 20:59:42.911593   27593 kubeadm.go:928] updating node { 192.168.39.79 8443 v1.28.4 crio true true} ...
	I0318 20:59:42.911712   27593 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-315064 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 20:59:42.911782   27593 ssh_runner.go:195] Run: crio config
	I0318 20:59:42.965674   27593 cni.go:84] Creating CNI manager for ""
	I0318 20:59:42.965693   27593 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 20:59:42.965703   27593 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 20:59:42.965721   27593 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.79 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-315064 NodeName:ha-315064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 20:59:42.965855   27593 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-315064"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 20:59:42.965873   27593 kube-vip.go:111] generating kube-vip config ...
	I0318 20:59:42.965911   27593 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 20:59:42.978464   27593 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 20:59:42.978583   27593 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
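Note: the generated kube-vip static pod binds the HA virtual IP 192.168.39.254 (the APIServerHAVIP from the cluster config) on eth0 and load-balances the control-plane endpoint on port 8443. One illustrative way to check by hand which node currently holds the VIP, not something the test itself runs, is:

    ip addr show dev eth0 | grep 192.168.39.254

the control-plane node that currently holds the plndr-cp-lock lease should show the address.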
	I0318 20:59:42.978633   27593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 20:59:42.988408   27593 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 20:59:42.988457   27593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 20:59:42.998179   27593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0318 20:59:43.017148   27593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 20:59:43.034816   27593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0318 20:59:43.052943   27593 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 20:59:43.071892   27593 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 20:59:43.075943   27593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 20:59:43.226499   27593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 20:59:43.241719   27593 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064 for IP: 192.168.39.79
	I0318 20:59:43.241736   27593 certs.go:194] generating shared ca certs ...
	I0318 20:59:43.241750   27593 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:59:43.241913   27593 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 20:59:43.241961   27593 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 20:59:43.241972   27593 certs.go:256] generating profile certs ...
	I0318 20:59:43.242051   27593 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/client.key
	I0318 20:59:43.242078   27593 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.5962ea4e
	I0318 20:59:43.242091   27593 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.5962ea4e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.231 192.168.39.84 192.168.39.254]
	I0318 20:59:43.325257   27593 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.5962ea4e ...
	I0318 20:59:43.325284   27593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.5962ea4e: {Name:mk2a080b7e875c8dea1076aff4dbd4e65753639d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:59:43.325470   27593 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.5962ea4e ...
	I0318 20:59:43.325486   27593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.5962ea4e: {Name:mkec9fd0aa1c43e53fb19a42378c875d41da6f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:59:43.325579   27593 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt.5962ea4e -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt
	I0318 20:59:43.325718   27593 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key.5962ea4e -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key
	I0318 20:59:43.325845   27593 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key
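Note: the apiserver certificate regenerated at 20:59:43 is issued for the SANs listed above (10.96.0.1, 127.0.0.1, 10.0.0.1, the three control-plane IPs and the 192.168.39.254 VIP), so clients reaching the API server through kube-vip do not hit TLS name mismatches. An illustrative way to inspect the SANs on the guest (not part of the test run):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'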
	I0318 20:59:43.325861   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 20:59:43.325873   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 20:59:43.325886   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 20:59:43.325899   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 20:59:43.325911   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 20:59:43.325923   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 20:59:43.325938   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 20:59:43.325955   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 20:59:43.326006   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 20:59:43.326035   27593 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 20:59:43.326044   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 20:59:43.326067   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 20:59:43.326087   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 20:59:43.326107   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 20:59:43.326141   27593 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 20:59:43.326168   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 20:59:43.326182   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:59:43.326194   27593 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 20:59:43.326798   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 20:59:43.356074   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 20:59:43.382454   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 20:59:43.409819   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 20:59:43.437134   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 20:59:43.465376   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 20:59:43.493142   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 20:59:43.519214   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/ha-315064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 20:59:43.546113   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 20:59:43.572049   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 20:59:43.598109   27593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 20:59:43.624327   27593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 20:59:43.642379   27593 ssh_runner.go:195] Run: openssl version
	I0318 20:59:43.648476   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 20:59:43.659629   27593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 20:59:43.664670   27593 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 20:59:43.664734   27593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 20:59:43.670737   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 20:59:43.680215   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 20:59:43.691328   27593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:59:43.696040   27593 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:59:43.696076   27593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 20:59:43.702113   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 20:59:43.711463   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 20:59:43.722633   27593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 20:59:43.727184   27593 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 20:59:43.727227   27593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 20:59:43.733139   27593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 20:59:43.742546   27593 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 20:59:43.747389   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 20:59:43.753196   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 20:59:43.759065   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 20:59:43.764939   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 20:59:43.770652   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 20:59:43.776479   27593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 20:59:43.782806   27593 kubeadm.go:391] StartCluster: {Name:ha-315064 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-315064 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.253 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:59:43.782974   27593 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 20:59:43.783035   27593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 20:59:43.828734   27593 cri.go:89] found id: "69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f"
	I0318 20:59:43.828757   27593 cri.go:89] found id: "b00d88c41f5b8f0909774cc7a06c87525bad093fc810f8b1ef0c10dc9d8d804f"
	I0318 20:59:43.828762   27593 cri.go:89] found id: "287440cfd8515950c684aba8aaa59b80068653836e2e952239977bb4dbbd4607"
	I0318 20:59:43.828766   27593 cri.go:89] found id: "6260c164c8ab141652f895ba2381853cc2b2d40476c56b5c34119d998c0458e3"
	I0318 20:59:43.828770   27593 cri.go:89] found id: "72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62"
	I0318 20:59:43.828778   27593 cri.go:89] found id: "3e90a0712d87da93fd95e79c7f48abef2866a72da970869e34c4407785bf1d02"
	I0318 20:59:43.828780   27593 cri.go:89] found id: "10b2ec1f746905109cc4491c15f3a445dccdaa14c18d574788b84b9a12fac375"
	I0318 20:59:43.828783   27593 cri.go:89] found id: "bfac5d0e774172b0c2522b62847344fa38a429790532d0bdbeab76c3c68ebcc7"
	I0318 20:59:43.828785   27593 cri.go:89] found id: "d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea"
	I0318 20:59:43.828790   27593 cri.go:89] found id: "a7126db5f28120b48a6ecfeae91706dcef9ebb4b9a28f58843b50a8e78edc014"
	I0318 20:59:43.828796   27593 cri.go:89] found id: "df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a"
	I0318 20:59:43.828799   27593 cri.go:89] found id: "1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62"
	I0318 20:59:43.828806   27593 cri.go:89] found id: "80a67e792a683e9cd15084fdd458c48aca2fc01666df37f095e8801c1085aa81"
	I0318 20:59:43.828809   27593 cri.go:89] found id: "3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d"
	I0318 20:59:43.828818   27593 cri.go:89] found id: "4480ab4493cfa4ba3e2fec1824c68a08a327a4eaf1e3e3dc0e3b153c0a80990c"
	I0318 20:59:43.828825   27593 cri.go:89] found id: ""
	I0318 20:59:43.828871   27593 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.456348477Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795925456320913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d353ff45-17f5-41e7-94a8-9b8ef1ae7515 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.456846432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0aa01ac6-ac59-4749-b729-2eb25e66ea31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.456931188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0aa01ac6-ac59-4749-b729-2eb25e66ea31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.457462702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710795658731966492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710795650725346849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5cb2d80a3b6f398e3ba023895fdfcc1514280cd6c7dde9aec739d4c2e898b5,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710795643738474805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710795632726534417,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1d00cae40376cfaf82ae9daa930450935fe4f57e6617936016fae5b654a0a0,PodSandboxId:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795624106019855,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710795623755689596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac,PodSandboxId:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795593617382217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metri
cs\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0043c9349ca8faa593d071fcadfff3013fbc8d2b72c4feaa37fc9f2df1f08b3a,PodSandboxId:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710795592476913665,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.has
h: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7,PodSandboxId:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710795590984837301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.contain
er.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcca6592d242f7b77728fa67d8577fbdbf9d494ef724161d1da281ec0324099,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710795590597369427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCou
nt: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710795591012124848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b,PodSandboxId:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795591239685729,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701,PodSandboxId:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710795590683222942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3,PodSandboxId:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710795590647848103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710795590534860494,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f,PodSandboxId:9333c93c0593d3573c59715027c2026e59f0d374330ed745ed3f149853572126,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710795572186501613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710795380728384356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: k
ube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710795086270972277,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-
5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710794843907362809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710794837867814543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710794818263785174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710794818183840
430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0aa01ac6-ac59-4749-b729-2eb25e66ea31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.509212175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ca8d9bb-a516-455e-a618-ab8c85e428f1 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.509288476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ca8d9bb-a516-455e-a618-ab8c85e428f1 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.510588064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82fee13c-bc81-42c9-b78b-9e4287882c3a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.511628407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795925511600896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82fee13c-bc81-42c9-b78b-9e4287882c3a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.512284699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9099bc8-954a-4179-8a21-f2ddab033d1c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.512341196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9099bc8-954a-4179-8a21-f2ddab033d1c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.513827464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710795658731966492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710795650725346849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5cb2d80a3b6f398e3ba023895fdfcc1514280cd6c7dde9aec739d4c2e898b5,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710795643738474805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710795632726534417,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1d00cae40376cfaf82ae9daa930450935fe4f57e6617936016fae5b654a0a0,PodSandboxId:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795624106019855,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710795623755689596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac,PodSandboxId:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795593617382217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metri
cs\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0043c9349ca8faa593d071fcadfff3013fbc8d2b72c4feaa37fc9f2df1f08b3a,PodSandboxId:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710795592476913665,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.has
h: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7,PodSandboxId:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710795590984837301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.contain
er.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcca6592d242f7b77728fa67d8577fbdbf9d494ef724161d1da281ec0324099,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710795590597369427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCou
nt: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710795591012124848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b,PodSandboxId:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795591239685729,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701,PodSandboxId:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710795590683222942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3,PodSandboxId:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710795590647848103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710795590534860494,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f,PodSandboxId:9333c93c0593d3573c59715027c2026e59f0d374330ed745ed3f149853572126,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710795572186501613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710795380728384356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: k
ube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710795086270972277,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-
5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710794843907362809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710794837867814543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710794818263785174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710794818183840
430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9099bc8-954a-4179-8a21-f2ddab033d1c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.564463547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f2d9ebf-866a-4c34-be18-730a8e613401 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.564544803Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f2d9ebf-866a-4c34-be18-730a8e613401 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.565850984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f9ffafe-5460-4a45-8270-b900a8374f7f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.566704057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795925566675844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f9ffafe-5460-4a45-8270-b900a8374f7f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.567787958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06911f69-530d-4383-bd89-d01e73ad3b41 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.567869718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06911f69-530d-4383-bd89-d01e73ad3b41 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.568363201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710795658731966492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710795650725346849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5cb2d80a3b6f398e3ba023895fdfcc1514280cd6c7dde9aec739d4c2e898b5,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710795643738474805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710795632726534417,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1d00cae40376cfaf82ae9daa930450935fe4f57e6617936016fae5b654a0a0,PodSandboxId:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795624106019855,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710795623755689596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac,PodSandboxId:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795593617382217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metri
cs\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0043c9349ca8faa593d071fcadfff3013fbc8d2b72c4feaa37fc9f2df1f08b3a,PodSandboxId:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710795592476913665,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.has
h: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7,PodSandboxId:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710795590984837301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.contain
er.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcca6592d242f7b77728fa67d8577fbdbf9d494ef724161d1da281ec0324099,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710795590597369427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCou
nt: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710795591012124848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b,PodSandboxId:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795591239685729,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701,PodSandboxId:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710795590683222942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3,PodSandboxId:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710795590647848103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710795590534860494,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f,PodSandboxId:9333c93c0593d3573c59715027c2026e59f0d374330ed745ed3f149853572126,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710795572186501613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710795380728384356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: k
ube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710795086270972277,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-
5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710794843907362809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710794837867814543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710794818263785174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710794818183840
430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06911f69-530d-4383-bd89-d01e73ad3b41 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.618593078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4784051-0f79-4768-aa58-4f1ee3068b65 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.618674066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4784051-0f79-4768-aa58-4f1ee3068b65 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.621503093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4896f0f2-f204-4869-8be2-9e4aa8a1a68b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.622100307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710795925622018517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4896f0f2-f204-4869-8be2-9e4aa8a1a68b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.623181413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcfe38fd-c008-4f87-9dc8-f417dacf7a22 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.623238737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcfe38fd-c008-4f87-9dc8-f417dacf7a22 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:05:25 ha-315064 crio[4010]: time="2024-03-18 21:05:25.623645151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710795658731966492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710795650725346849,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5cb2d80a3b6f398e3ba023895fdfcc1514280cd6c7dde9aec739d4c2e898b5,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710795643738474805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710795632726534417,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kubernetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f1d00cae40376cfaf82ae9daa930450935fe4f57e6617936016fae5b654a0a0,PodSandboxId:df412c1c15d89dcb8905a5ec1a48f5fe4a6624e49131f5c71cef9d6d8d3d9d8c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710795624106019855,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef,PodSandboxId:88ecf2864f169c7297cba345ac0eea55b986fadc3f42808095b6f660e4a3b83d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710795623755689596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 011b56247b514cfea4dc3b2076428e51,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termination
MessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac,PodSandboxId:b580e6e0ea5007537751ca2e9337416289cabe7fb286d787f0487728eeaeedb0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795593617382217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metri
cs\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0043c9349ca8faa593d071fcadfff3013fbc8d2b72c4feaa37fc9f2df1f08b3a,PodSandboxId:f8522b2cee4ff6f5a63dec7187e7cae019d9c04dc182a766102bdd8e006f73d6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710795592476913665,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.has
h: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7,PodSandboxId:54ab309e4736cc528ede4df44dc6a518df7c1e4c00e21e9c8b6961306ac76205,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710795590984837301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.contain
er.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dcca6592d242f7b77728fa67d8577fbdbf9d494ef724161d1da281ec0324099,PodSandboxId:700ca15f1576d1d2014da1317a212142c7f03e02aaa9887393af2b58f47e06da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710795590597369427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddebef9-cc69-4535-8dc5-9117878507d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7689e3e2,io.kubernetes.container.restartCou
nt: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3,PodSandboxId:5bfe7f29d452099c96a83e58b967c1654794a3ea34db14d3e5ee513167a2a44f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710795591012124848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tbghx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c5ae7df-5e40-42ca-b8e6-d7bbc335e065,},Annotations:map[string]string{io.kubernetes.container.hash: 73f90006,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b,PodSandboxId:e57c024b5c8e114f99d8263faa5284e5a1444a2bb1bfc3a63df4931c51af535d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710795591239685729,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701,PodSandboxId:3457a60ae7eb5867dd475d1aa7897fcac6f58e8b45e5d5978a1c435fb81582b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710795590683222942,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3,PodSandboxId:a78045a2613b019cf5840300650a45548c930c551cf686e0eec0ce4246f494ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710795590647848103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map
[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b,PodSandboxId:1acef47bba5b0f282adc927e3cd888c42e4bdbc06a3781857c362bf5d9b30fd6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710795590534860494,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9524c4b1818864ef82847de110d9d59a,},Annotations:map[string]string{io.kube
rnetes.container.hash: 8bf59652,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f,PodSandboxId:9333c93c0593d3573c59715027c2026e59f0d374330ed745ed3f149853572126,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710795572186501613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fgqzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245a67a5-7e01-445d-a741-900dd301c127,},Annotations:map[string]string{io.kubernetes.container.hash: cc5d5fe3,
io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72dc2ec14492ddc28a11bca3f5fa8b8526f5fb9d4a5ac809d15ccf14990f1f62,PodSandboxId:154ec2a128fe59f0ce1b1879503baacf779f1fcfb560193ec95cb90ea0d4a320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710795380728384356,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: k
ube-vip-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95034e2848fe757395e864ee468c38aa,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962d0c8af6a9ac625c108ee441b2b77e3adc13729ba696c0b609c87bb11fb820,PodSandboxId:b1e1139d7a57e670374214fdaeccea50d887125b5025a0ab6bc84b904de05397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710795086270972277,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-
5b5d89c9d6-c7lzc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3878d9ed-31cf-4a22-9a2e-9866d43fdb8b,},Annotations:map[string]string{io.kubernetes.container.hash: ccc3082b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea,PodSandboxId:868a925ed8d8e2676664714b058bdd47de81da69a46497a2cf257996e5f42633,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710794843907362809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hrrzn,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: bd22f324-f86b-458f-8443-1fbb4c47521e,},Annotations:map[string]string{io.kubernetes.container.hash: e6b8ce27,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a,PodSandboxId:01b267bb0cc88730f1a461f9cc9036266bb0e66a9b44b28eff4d4006d82e3983,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3
a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710794837867814543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrm24,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b686bb37-4624-4b09-b335-d292a914e41c,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28a28f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62,PodSandboxId:b8f2e721ddf5c4f026dc84daab3047b0076a2145e040615335d60d00acc9fa35,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_E
XITED,CreatedAt:1710794818263785174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6c104d584739b45afeee644d28478c9,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d,PodSandboxId:2223b5076d0b6a9c19b3abcaceaa84a042e434df0b1f13533e040fd0a87787ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710794818183840
430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-315064,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455fc330bc32275f51604045163662be,},Annotations:map[string]string{io.kubernetes.container.hash: 5d14dc4b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcfe38fd-c008-4f87-9dc8-f417dacf7a22 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	037f74b5576e6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   3                   88ecf2864f169       kube-controller-manager-ha-315064
	925e697415c9d       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   5bfe7f29d4520       kindnet-tbghx
	0a5cb2d80a3b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   700ca15f1576d       storage-provisioner
	0d936680575ab       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            3                   1acef47bba5b0       kube-apiserver-ha-315064
	8f1d00cae4037       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   df412c1c15d89       busybox-5b5d89c9d6-c7lzc
	41aa0b241e9bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Exited              kube-controller-manager   2                   88ecf2864f169       kube-controller-manager-ha-315064
	0e6b228e5b035       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   2                   b580e6e0ea500       coredns-5dd5756b68-fgqzg
	0043c9349ca8f       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  3                   f8522b2cee4ff       kube-vip-ha-315064
	1a2f6a0548a18       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   e57c024b5c8e1       coredns-5dd5756b68-hrrzn
	d26255f506377       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   5bfe7f29d4520       kindnet-tbghx
	93d601359a854       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago       Running             kube-proxy                1                   54ab309e4736c       kube-proxy-wrm24
	d7de86ecadf35       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago       Running             kube-scheduler            1                   3457a60ae7eb5       kube-scheduler-ha-315064
	827286fc4f58d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago       Running             etcd                      1                   a78045a2613b0       etcd-ha-315064
	7dcca6592d242       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   700ca15f1576d       storage-provisioner
	cf4b0f5d3ae02       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Exited              kube-apiserver            2                   1acef47bba5b0       kube-apiserver-ha-315064
	69b53dacdf2d5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Exited              coredns                   1                   9333c93c0593d       coredns-5dd5756b68-fgqzg
	72dc2ec14492d       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      9 minutes ago       Exited              kube-vip                  2                   154ec2a128fe5       kube-vip-ha-315064
	962d0c8af6a9a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   b1e1139d7a57e       busybox-5b5d89c9d6-c7lzc
	d5c124916621e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      18 minutes ago      Exited              coredns                   0                   868a925ed8d8e       coredns-5dd5756b68-hrrzn
	df303842f5387       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      18 minutes ago      Exited              kube-proxy                0                   01b267bb0cc88       kube-proxy-wrm24
	1a42f9c834d0e       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      18 minutes ago      Exited              kube-scheduler            0                   b8f2e721ddf5c       kube-scheduler-ha-315064
	3dfd1d922dc88       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      18 minutes ago      Exited              etcd                      0                   2223b5076d0b6       etcd-ha-315064
	
	
	==> coredns [0e6b228e5b035ccb85f27492a418c288b837f33b71bd608e80d6ab52add8cdac] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39510 - 17999 "HINFO IN 8959287172446377104.761263948973536297. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020278603s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:58396->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [1a2f6a0548a18cdb899bb33ca4e6004b7911d52a84fd5684b35898a95c33693b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58803 - 20913 "HINFO IN 7029768413915660847.7035293484429296979. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015204357s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [69b53dacdf2d5c7f23b07df07e1d82c52989e54fae5a5e41f4ca98a36bf0ab2f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35281 - 44397 "HINFO IN 3605527226997148585.3932514430432415525. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021358967s
	
	
	==> coredns [d5c124916621ee72f2400af64107dfcd65418fa83827f09d5d1e6477ca29d2ea] <==
	[INFO] 10.244.0.4:35472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011908s
	[INFO] 10.244.0.4:59665 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001225277s
	[INFO] 10.244.0.4:48478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082298s
	[INFO] 10.244.0.4:58488 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037583s
	[INFO] 10.244.0.4:52714 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122718s
	[INFO] 10.244.2.2:38213 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144668s
	[INFO] 10.244.2.2:33237 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140758s
	[INFO] 10.244.1.2:55432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156014s
	[INFO] 10.244.1.2:43813 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140774s
	[INFO] 10.244.0.4:56118 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008172s
	[INFO] 10.244.0.4:50788 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172997s
	[INFO] 10.244.2.2:59802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176543s
	[INFO] 10.244.2.2:48593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240495s
	[INFO] 10.244.1.2:57527 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153491s
	[INFO] 10.244.1.2:41470 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000189177s
	[INFO] 10.244.1.2:34055 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148936s
	[INFO] 10.244.0.4:58773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274692s
	[INFO] 10.244.0.4:38762 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072594s
	[INFO] 10.244.0.4:34340 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059481s
	[INFO] 10.244.0.4:56101 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011093s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-315064
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T20_47_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:47:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:05:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:00:46 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:00:46 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:00:46 +0000   Mon, 18 Mar 2024 20:47:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:00:46 +0000   Mon, 18 Mar 2024 20:47:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-315064
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 67f9d3eed04b4b99974be1860661f403
	  System UUID:                67f9d3ee-d04b-4b99-974b-e1860661f403
	  Boot ID:                    da42c8d7-0f88-49a8-83c7-2bcbed46eb7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-c7lzc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-5dd5756b68-fgqzg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-5dd5756b68-hrrzn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-315064                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-tbghx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-315064             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-315064    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-wrm24                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-315064             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-315064                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 18m                    kube-proxy       
	  Normal   Starting                 4m52s                  kube-proxy       
	  Normal   NodeHasSufficientPID     18m                    kubelet          Node ha-315064 status is now: NodeHasSufficientPID
	  Normal   Starting                 18m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m                    kubelet          Node ha-315064 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                    kubelet          Node ha-315064 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                    node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal   NodeReady                18m                    kubelet          Node ha-315064 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Warning  ContainerGCFailed        6m19s (x2 over 7m19s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m39s                  node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	  Normal   RegisteredNode           3m3s                   node-controller  Node ha-315064 event: Registered Node ha-315064 in Controller
	
	
	Name:               ha-315064-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_49_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:49:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:05:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:01:17 +0000   Mon, 18 Mar 2024 21:00:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:01:17 +0000   Mon, 18 Mar 2024 21:00:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:01:17 +0000   Mon, 18 Mar 2024 21:00:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:01:17 +0000   Mon, 18 Mar 2024 21:00:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-315064-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 84b0eca72c194ee2b4b37351cd8bc63f
	  System UUID:                84b0eca7-2c19-4ee2-b4b3-7351cd8bc63f
	  Boot ID:                    c2d21a9e-046d-4f00-8ea7-ede4fd23ed3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-7z7sj                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-315064-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-dvtw7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-315064-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-315064-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-bccjj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-315064-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-315064-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  RegisteredNode           15m                    node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-315064-m02 status is now: NodeNotReady
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node ha-315064-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node ha-315064-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node ha-315064-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m39s                  node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	  Normal  RegisteredNode           3m3s                   node-controller  Node ha-315064-m02 event: Registered Node ha-315064-m02 in Controller
	
	
	Name:               ha-315064-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-315064-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=ha-315064
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T20_52_03_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 20:52:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-315064-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:02:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 21:02:38 +0000   Mon, 18 Mar 2024 21:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 21:02:38 +0000   Mon, 18 Mar 2024 21:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 21:02:38 +0000   Mon, 18 Mar 2024 21:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 21:02:38 +0000   Mon, 18 Mar 2024 21:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-315064-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e505b03139344fc9b8ceffed32c9bea6
	  System UUID:                e505b031-3934-4fc9-b8ce-ffed32c9bea6
	  Boot ID:                    ed5e098f-3395-44b1-a126-a6378a97cc9b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-fbnvx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-rwjjr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-dhhjx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node ha-315064-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node ha-315064-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node ha-315064-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-315064-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m39s                  node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   NodeNotReady             3m59s                  node-controller  Node ha-315064-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m3s                   node-controller  Node ha-315064-m04 event: Registered Node ha-315064-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m49s)  kubelet          Node ha-315064-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m49s)  kubelet          Node ha-315064-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m49s)  kubelet          Node ha-315064-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m49s)  kubelet          Node ha-315064-m04 has been rebooted, boot id: ed5e098f-3395-44b1-a126-a6378a97cc9b
	  Normal   NodeReady                2m48s (x2 over 2m49s)  kubelet          Node ha-315064-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-315064-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +8.134199] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.060622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061926] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.170895] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.158641] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.304087] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +5.155955] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.063498] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.791144] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.535740] kauditd_printk_skb: 57 callbacks suppressed
	[Mar18 20:47] kauditd_printk_skb: 35 callbacks suppressed
	[  +2.157125] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[ +10.330891] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.069969] kauditd_printk_skb: 36 callbacks suppressed
	[Mar18 20:49] kauditd_printk_skb: 28 callbacks suppressed
	[Mar18 20:59] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	[  +0.172357] systemd-fstab-generator[3830]: Ignoring "noauto" option for root device
	[  +0.274681] systemd-fstab-generator[3914]: Ignoring "noauto" option for root device
	[  +0.229244] systemd-fstab-generator[3965]: Ignoring "noauto" option for root device
	[  +0.330679] systemd-fstab-generator[3994]: Ignoring "noauto" option for root device
	[ +10.329786] systemd-fstab-generator[4120]: Ignoring "noauto" option for root device
	[  +0.087000] kauditd_printk_skb: 110 callbacks suppressed
	[  +6.921360] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 21:00] kauditd_printk_skb: 95 callbacks suppressed
	[ +29.533604] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [3dfd1d922dc8898a2be8ee7a9762ca3203d1997591302f07e6ba3b413be3713d] <==
	WARNING: 2024/03/18 20:58:00 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T20:58:00.230328Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T20:57:52.19856Z","time spent":"8.031753169s","remote":"127.0.0.1:47508","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 "}
	WARNING: 2024/03/18 20:58:00 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T20:58:00.245455Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"882bbdde445c6a1a","rtt":"4.075318ms","error":"dial tcp 192.168.39.231:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-18T20:58:00.245631Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"882bbdde445c6a1a","rtt":"12.060143ms","error":"dial tcp 192.168.39.231:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-03-18T20:58:00.33522Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.79:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T20:58:00.335285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.79:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T20:58:00.335427Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a91a1bbc2c758cdc","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-18T20:58:00.335562Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335637Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335693Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335747Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335853Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335945Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.335982Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"882bbdde445c6a1a"}
	{"level":"info","ts":"2024-03-18T20:58:00.336097Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336164Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336203Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.33626Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336342Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336397Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.336428Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T20:58:00.33954Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"info","ts":"2024-03-18T20:58:00.339698Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"info","ts":"2024-03-18T20:58:00.339742Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-315064","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.79:2380"],"advertise-client-urls":["https://192.168.39.79:2379"]}
	
	
	==> etcd [827286fc4f58d1bdf1f63ac481f2d31cce704dcee919a6d68c43fc3fb7ca7bc3] <==
	{"level":"info","ts":"2024-03-18T21:02:02.888566Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:02.905613Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a91a1bbc2c758cdc","to":"e9d877c1a39931b2","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-18T21:02:02.905732Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:02.90587Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a91a1bbc2c758cdc","to":"e9d877c1a39931b2","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-18T21:02:02.906109Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"warn","ts":"2024-03-18T21:02:06.393864Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e9d877c1a39931b2","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"3.606015ms"}
	{"level":"warn","ts":"2024-03-18T21:02:06.393957Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"882bbdde445c6a1a","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"3.704359ms"}
	{"level":"warn","ts":"2024-03-18T21:02:51.62124Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.84:58986","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-03-18T21:02:51.642274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc switched to configuration voters=(9812144975484054042 12185082236818001116)"}
	{"level":"info","ts":"2024-03-18T21:02:51.642446Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1edb09d3fc38073e","local-member-id":"a91a1bbc2c758cdc","removed-remote-peer-id":"e9d877c1a39931b2","removed-remote-peer-urls":["https://192.168.39.84:2380"]}
	{"level":"info","ts":"2024-03-18T21:02:51.642531Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"warn","ts":"2024-03-18T21:02:51.643328Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:51.64341Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"warn","ts":"2024-03-18T21:02:51.643574Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:51.643606Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:51.643813Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"warn","ts":"2024-03-18T21:02:51.644214Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2","error":"context canceled"}
	{"level":"warn","ts":"2024-03-18T21:02:51.644347Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e9d877c1a39931b2","error":"failed to read e9d877c1a39931b2 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-18T21:02:51.644467Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"warn","ts":"2024-03-18T21:02:51.64471Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-03-18T21:02:51.644773Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:51.644858Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9d877c1a39931b2"}
	{"level":"info","ts":"2024-03-18T21:02:51.644932Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"a91a1bbc2c758cdc","removed-remote-peer-id":"e9d877c1a39931b2"}
	{"level":"warn","ts":"2024-03-18T21:02:51.662305Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a91a1bbc2c758cdc","remote-peer-id-stream-handler":"a91a1bbc2c758cdc","remote-peer-id-from":"e9d877c1a39931b2"}
	{"level":"warn","ts":"2024-03-18T21:02:51.666373Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a91a1bbc2c758cdc","remote-peer-id-stream-handler":"a91a1bbc2c758cdc","remote-peer-id-from":"e9d877c1a39931b2"}
	
	
	==> kernel <==
	 21:05:26 up 19 min,  0 users,  load average: 0.46, 0.57, 0.42
	Linux ha-315064 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [925e697415c9dc3dff8c6bfe093df3b90a4d0935b77f89159fe2e06278bfacb9] <==
	I0318 21:04:42.192470       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 21:04:52.284325       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 21:04:52.284373       1 main.go:227] handling current node
	I0318 21:04:52.284391       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 21:04:52.284397       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 21:04:52.284530       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 21:04:52.284564       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 21:05:02.300200       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 21:05:02.300242       1 main.go:227] handling current node
	I0318 21:05:02.300253       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 21:05:02.300259       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 21:05:02.300369       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 21:05:02.300374       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 21:05:12.313739       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 21:05:12.313786       1 main.go:227] handling current node
	I0318 21:05:12.313797       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 21:05:12.313803       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 21:05:12.313916       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 21:05:12.313947       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	I0318 21:05:22.321518       1 main.go:223] Handling node with IPs: map[192.168.39.79:{}]
	I0318 21:05:22.321564       1 main.go:227] handling current node
	I0318 21:05:22.321579       1 main.go:223] Handling node with IPs: map[192.168.39.231:{}]
	I0318 21:05:22.321585       1 main.go:250] Node ha-315064-m02 has CIDR [10.244.1.0/24] 
	I0318 21:05:22.321722       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 21:05:22.321753       1 main.go:250] Node ha-315064-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d26255f506377faadfb9d1051601a2769d0d3ab2a2dc34ecff00f93d4b4bedb3] <==
	I0318 20:59:51.606964       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 21:00:01.993315       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0318 21:00:11.994412       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0318 21:00:13.280594       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 21:00:15.282152       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0318 21:00:19.425385       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [0d936680575ab32f6df3fcc2a550e5e8799430398ab514d4e3a4e2ead00df493] <==
	I0318 21:00:34.952567       1 controller.go:85] Starting OpenAPI V3 controller
	I0318 21:00:34.952602       1 naming_controller.go:291] Starting NamingConditionController
	I0318 21:00:34.952624       1 establishing_controller.go:76] Starting EstablishingController
	I0318 21:00:34.952677       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 21:00:34.952693       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 21:00:34.952736       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 21:00:35.007965       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 21:00:35.008355       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 21:00:35.009225       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 21:00:35.012823       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 21:00:35.012974       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 21:00:35.016089       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 21:00:35.018006       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 21:00:35.018108       1 aggregator.go:166] initial CRD sync complete...
	I0318 21:00:35.018128       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 21:00:35.018134       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 21:00:35.018139       1 cache.go:39] Caches are synced for autoregister controller
	I0318 21:00:35.018452       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	W0318 21:00:35.035179       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.84]
	I0318 21:00:35.039243       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 21:00:35.040635       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 21:00:35.051458       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0318 21:00:35.057864       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0318 21:00:35.914572       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 21:00:36.676732       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.231 192.168.39.79 192.168.39.84]
	
	
	==> kube-apiserver [cf4b0f5d3ae02faa15e5f6f742181db6d2fc2bd90647d14971f743b5b932246b] <==
	I0318 20:59:51.577219       1 options.go:220] external host was not specified, using 192.168.39.79
	I0318 20:59:51.585003       1 server.go:148] Version: v1.28.4
	I0318 20:59:51.587467       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 20:59:52.312566       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 20:59:52.328645       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 20:59:52.328662       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 20:59:52.328968       1 instance.go:298] Using reconciler: lease
	W0318 21:00:12.308944       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0318 21:00:12.311289       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0318 21:00:12.332630       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0318 21:00:12.332672       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [037f74b5576e6bbc24c82d80de8dbe648b4e08d4d52d299880fdcacec772406c] <==
	I0318 21:02:50.485586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.208µs"
	I0318 21:02:50.647865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.05µs"
	I0318 21:02:50.665087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="50.16µs"
	I0318 21:02:50.669561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="98.449µs"
	I0318 21:02:54.156585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="30.962675ms"
	I0318 21:02:54.156697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.605µs"
	I0318 21:03:03.317779       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-315064-m04"
	E0318 21:03:03.386128       1 garbagecollector.go:392] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-315064-m03", UID:"325a2dad-f78a-4f1e-b340-8125aad8cd70", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-315064-m03", UID:"2381489a-8909-4ce6-ac9d-85dc36f9470f", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-315064-m03" not found
	I0318 21:03:05.835194       1 event.go:307] "Event occurred" object="ha-315064-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-315064-m03 event: Removing Node ha-315064-m03 from Controller"
	E0318 21:03:10.826298       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:10.826390       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:10.826427       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:10.826456       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:10.826480       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:30.827192       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:30.827270       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:30.827279       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:30.827294       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	E0318 21:03:30.827300       1 gc_controller.go:153] "Failed to get node" err="node \"ha-315064-m03\" not found" node="ha-315064-m03"
	I0318 21:03:40.863961       1 event.go:307] "Event occurred" object="ha-315064-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-315064-m04 status is now: NodeNotReady"
	I0318 21:03:40.910496       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-fbnvx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:03:40.972503       1 event.go:307] "Event occurred" object="kube-system/kindnet-rwjjr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:03:41.012784       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-dhhjx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:03:41.025439       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="117.690871ms"
	I0318 21:03:41.025572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="59.505µs"
	
	
	==> kube-controller-manager [41aa0b241e9bd9b80cd76d1e268c444e31c9eb9259e2ab90b4b683c9b171efef] <==
	I0318 21:00:24.623138       1 serving.go:348] Generated self-signed cert in-memory
	I0318 21:00:24.795515       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 21:00:24.795568       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:00:24.796875       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 21:00:24.797101       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 21:00:24.797439       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 21:00:24.797929       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0318 21:00:34.976143       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [93d601359a854749551ef02d8f1e3c61027b367a8abc6d4666d4776cd011dec7] <==
	I0318 21:00:33.252251       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 21:00:33.252321       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:00:33.255296       1 server_others.go:152] "Using iptables Proxier"
	I0318 21:00:33.255439       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:00:33.255804       1 server.go:846] "Version info" version="v1.28.4"
	I0318 21:00:33.256161       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:00:33.257994       1 config.go:188] "Starting service config controller"
	I0318 21:00:33.258226       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:00:33.258333       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:00:33.258360       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:00:33.259220       1 config.go:315] "Starting node config controller"
	I0318 21:00:33.259614       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0318 21:00:36.261450       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0318 21:00:36.261746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 21:00:36.265958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 21:00:36.262144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 21:00:36.266092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 21:00:36.262264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 21:00:36.266166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0318 21:00:37.462881       1 shared_informer.go:318] Caches are synced for node config
	I0318 21:00:37.860152       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:00:37.959510       1 shared_informer.go:318] Caches are synced for endpoint slice config
	W0318 21:03:50.821945       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0318 21:03:50.821956       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0318 21:03:50.822161       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [df303842f5387f6f90a5ebef936952f099b061124647a20c2e2b635342f1221a] <==
	E0318 20:56:33.760738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:33.760552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:33.760900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:42.144651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:42.144728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:42.144650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:42.144752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:42.144864       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:42.144914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:55.073524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:55.073849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:55.074244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:55.074372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:56:55.074521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:56:55.074576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:10.433543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:10.433670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:13.505737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:13.505807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:13.505742       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:13.505844       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:44.229455       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:44.229674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-315064&resourceVersion=1827": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 20:57:59.586237       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 20:57:59.586975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [1a42f9c834d0e8ea7290a6cd8fa094bd8455647e4a868eadad309f2e6f2b4e62] <==
	E0318 20:57:53.434839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 20:57:53.813687       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 20:57:53.813741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 20:57:53.824675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 20:57:53.824749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 20:57:53.962557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 20:57:53.962660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 20:57:54.386941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 20:57:54.387086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 20:57:54.488944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 20:57:54.489089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 20:57:54.653222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 20:57:54.653374       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 20:57:54.705367       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 20:57:54.705458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 20:57:54.786211       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 20:57:54.786238       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 20:57:54.906322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 20:57:54.906413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 20:57:59.960893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 20:57:59.960988       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0318 20:58:00.185004       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 20:58:00.185228       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 20:58:00.185417       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0318 20:58:00.185589       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d7de86ecadf357f4a4df3d8543b10b7b39158d1ee1736fcd2731c4d85ba52701] <==
	W0318 21:00:29.121632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.79:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:29.121699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.79:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:29.195713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.79:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:29.195792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.79:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:30.849327       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:30.849425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:31.232870       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.39.79:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:31.232941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.79:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:31.566400       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.79:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:31.566470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.79:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:31.850133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:31.850204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:32.508767       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.79:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0318 21:00:32.508848       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.79:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	W0318 21:00:34.965755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 21:00:34.969243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 21:00:34.968618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 21:00:34.969364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 21:00:34.968652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 21:00:34.969454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 21:00:34.968732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 21:00:34.969503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 21:00:34.968815       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 21:00:34.969549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 21:00:47.854685       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 21:01:07 ha-315064 kubelet[1363]: E0318 21:01:07.734463    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:01:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:01:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:01:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:01:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 21:02:07 ha-315064 kubelet[1363]: E0318 21:02:07.736095    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:02:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:02:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:02:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:02:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 21:03:07 ha-315064 kubelet[1363]: E0318 21:03:07.736207    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:03:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:03:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:03:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:03:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 21:04:07 ha-315064 kubelet[1363]: E0318 21:04:07.738332    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:04:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:04:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:04:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:04:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 21:05:07 ha-315064 kubelet[1363]: E0318 21:05:07.735307    1363 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:05:07 ha-315064 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:05:07 ha-315064 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:05:07 ha-315064 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:05:07 ha-315064 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:05:25.133129   29602 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18421-5321/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-315064 -n ha-315064
helpers_test.go:261: (dbg) Run:  kubectl --context ha-315064 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.97s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (312.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-119391
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-119391
E0318 21:20:14.158028   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:20:23.236484   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-119391: exit status 82 (2m2.705558596s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-119391-m03"  ...
	* Stopping node "multinode-119391-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-119391" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119391 --wait=true -v=8 --alsologtostderr
E0318 21:23:26.281487   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-119391 --wait=true -v=8 --alsologtostderr: (3m7.263085952s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-119391
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-119391 -n multinode-119391
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-119391 logs -n 25: (1.738198926s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m02:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2904894668/001/cp-test_multinode-119391-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m02:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391:/home/docker/cp-test_multinode-119391-m02_multinode-119391.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391 sudo cat                                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m02_multinode-119391.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m02:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03:/home/docker/cp-test_multinode-119391-m02_multinode-119391-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391-m03 sudo cat                                   | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m02_multinode-119391-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp testdata/cp-test.txt                                                | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2904894668/001/cp-test_multinode-119391-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391:/home/docker/cp-test_multinode-119391-m03_multinode-119391.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391 sudo cat                                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m03_multinode-119391.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02:/home/docker/cp-test_multinode-119391-m03_multinode-119391-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391-m02 sudo cat                                   | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m03_multinode-119391-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-119391 node stop m03                                                          | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	| node    | multinode-119391 node start                                                             | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-119391                                                                | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:19 UTC |                     |
	| stop    | -p multinode-119391                                                                     | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:19 UTC |                     |
	| start   | -p multinode-119391                                                                     | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:21 UTC | 18 Mar 24 21:24 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-119391                                                                | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:24 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:21:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:21:23.660930   38073 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:21:23.661047   38073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:21:23.661056   38073 out.go:304] Setting ErrFile to fd 2...
	I0318 21:21:23.661060   38073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:21:23.661222   38073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:21:23.661714   38073 out.go:298] Setting JSON to false
	I0318 21:21:23.662563   38073 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3828,"bootTime":1710793056,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:21:23.662615   38073 start.go:139] virtualization: kvm guest
	I0318 21:21:23.665135   38073 out.go:177] * [multinode-119391] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:21:23.666571   38073 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:21:23.666576   38073 notify.go:220] Checking for updates...
	I0318 21:21:23.667976   38073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:21:23.669442   38073 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:21:23.670643   38073 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:21:23.671885   38073 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:21:23.673235   38073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:21:23.675215   38073 config.go:182] Loaded profile config "multinode-119391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:21:23.675336   38073 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:21:23.675890   38073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:21:23.675947   38073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:21:23.692687   38073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0318 21:21:23.693080   38073 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:21:23.693675   38073 main.go:141] libmachine: Using API Version  1
	I0318 21:21:23.693710   38073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:21:23.694075   38073 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:21:23.694276   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:21:23.726767   38073 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:21:23.727913   38073 start.go:297] selected driver: kvm2
	I0318 21:21:23.727924   38073 start.go:901] validating driver "kvm2" against &{Name:multinode-119391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:21:23.728060   38073 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:21:23.728372   38073 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:21:23.728443   38073 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:21:23.743257   38073 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:21:23.743865   38073 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:21:23.743935   38073 cni.go:84] Creating CNI manager for ""
	I0318 21:21:23.743949   38073 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 21:21:23.744002   38073 start.go:340] cluster config:
	{Name:multinode-119391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:21:23.744129   38073 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:21:23.745747   38073 out.go:177] * Starting "multinode-119391" primary control-plane node in "multinode-119391" cluster
	I0318 21:21:23.746929   38073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:21:23.746960   38073 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 21:21:23.746973   38073 cache.go:56] Caching tarball of preloaded images
	I0318 21:21:23.747042   38073 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 21:21:23.747054   38073 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 21:21:23.747182   38073 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/config.json ...
	I0318 21:21:23.747373   38073 start.go:360] acquireMachinesLock for multinode-119391: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:21:23.747444   38073 start.go:364] duration metric: took 52.931µs to acquireMachinesLock for "multinode-119391"
	I0318 21:21:23.747462   38073 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:21:23.747473   38073 fix.go:54] fixHost starting: 
	I0318 21:21:23.747718   38073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:21:23.747754   38073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:21:23.760690   38073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0318 21:21:23.761066   38073 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:21:23.761541   38073 main.go:141] libmachine: Using API Version  1
	I0318 21:21:23.761578   38073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:21:23.761890   38073 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:21:23.762075   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:21:23.762210   38073 main.go:141] libmachine: (multinode-119391) Calling .GetState
	I0318 21:21:23.763698   38073 fix.go:112] recreateIfNeeded on multinode-119391: state=Running err=<nil>
	W0318 21:21:23.763712   38073 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:21:23.765508   38073 out.go:177] * Updating the running kvm2 "multinode-119391" VM ...
	I0318 21:21:23.766624   38073 machine.go:94] provisionDockerMachine start ...
	I0318 21:21:23.766638   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:21:23.766814   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:23.769014   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:23.769434   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:23.769463   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:23.769550   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:23.769668   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:23.769813   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:23.769915   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:23.770040   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:21:23.770252   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:21:23.770266   38073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:21:23.882968   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-119391
	
	I0318 21:21:23.882995   38073 main.go:141] libmachine: (multinode-119391) Calling .GetMachineName
	I0318 21:21:23.883239   38073 buildroot.go:166] provisioning hostname "multinode-119391"
	I0318 21:21:23.883269   38073 main.go:141] libmachine: (multinode-119391) Calling .GetMachineName
	I0318 21:21:23.883467   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:23.886313   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:23.886728   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:23.886747   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:23.886913   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:23.887101   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:23.887242   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:23.887344   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:23.887476   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:21:23.887661   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:21:23.887677   38073 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-119391 && echo "multinode-119391" | sudo tee /etc/hostname
	I0318 21:21:24.012201   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-119391
	
	I0318 21:21:24.012223   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:24.014918   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.015255   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.015295   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.015420   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:24.015613   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.015775   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.015917   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:24.016071   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:21:24.016224   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:21:24.016240   38073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-119391' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-119391/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-119391' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:21:24.126855   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:21:24.126893   38073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:21:24.126912   38073 buildroot.go:174] setting up certificates
	I0318 21:21:24.126932   38073 provision.go:84] configureAuth start
	I0318 21:21:24.126943   38073 main.go:141] libmachine: (multinode-119391) Calling .GetMachineName
	I0318 21:21:24.127210   38073 main.go:141] libmachine: (multinode-119391) Calling .GetIP
	I0318 21:21:24.129854   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.130230   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.130259   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.130389   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:24.132318   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.132669   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.132711   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.132845   38073 provision.go:143] copyHostCerts
	I0318 21:21:24.132878   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:21:24.132929   38073 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:21:24.132941   38073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:21:24.133019   38073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:21:24.133127   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:21:24.133162   38073 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:21:24.133172   38073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:21:24.133213   38073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:21:24.133267   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:21:24.133290   38073 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:21:24.133299   38073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:21:24.133330   38073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:21:24.133397   38073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.multinode-119391 san=[127.0.0.1 192.168.39.127 localhost minikube multinode-119391]
	I0318 21:21:24.228063   38073 provision.go:177] copyRemoteCerts
	I0318 21:21:24.228111   38073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:21:24.228130   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:24.230664   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.231008   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.231034   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.231217   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:24.231403   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.231562   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:24.231685   38073 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:21:24.321664   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 21:21:24.321726   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:21:24.350602   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 21:21:24.350664   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 21:21:24.378769   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 21:21:24.378819   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:21:24.407470   38073 provision.go:87] duration metric: took 280.526167ms to configureAuth
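
As a quick cross-check, the SANs listed for the regenerated server certificate above (127.0.0.1, 192.168.39.127, localhost, minikube, multinode-119391) could be inspected on the node roughly as shown below. This is a hedged sketch, not part of the test run: it assumes openssl is available inside the guest image and reuses the /etc/docker/server.pem path from the scp step above.

out/minikube-linux-amd64 -p multinode-119391 ssh "sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'"
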
	I0318 21:21:24.407503   38073 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:21:24.407734   38073 config.go:182] Loaded profile config "multinode-119391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:21:24.407820   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:24.410404   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.410861   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.410890   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.411062   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:24.411245   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.411418   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.411581   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:24.411731   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:21:24.411927   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:21:24.411962   38073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:22:55.260329   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:22:55.260362   38073 machine.go:97] duration metric: took 1m31.493724827s to provisionDockerMachine
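
Nearly all of the 1m31s recorded for provisionDockerMachine above is spent in the single SSH command issued at 21:21:24 and completed at 21:22:55, which writes the crio drop-in and restarts crio. To confirm the resulting option on the node, an illustrative check (the ssh wrapper is an assumption; the path and contents are taken verbatim from the log) would be:

out/minikube-linux-amd64 -p multinode-119391 ssh "cat /etc/sysconfig/crio.minikube"
# expected output, per the tee command above:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
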
	I0318 21:22:55.260379   38073 start.go:293] postStartSetup for "multinode-119391" (driver="kvm2")
	I0318 21:22:55.260393   38073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:22:55.260414   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.260740   38073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:22:55.260777   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:22:55.263473   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.263976   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.263995   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.264214   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:22:55.264402   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.264559   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:22:55.264703   38073 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:22:55.354158   38073 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:22:55.358990   38073 command_runner.go:130] > NAME=Buildroot
	I0318 21:22:55.359009   38073 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 21:22:55.359015   38073 command_runner.go:130] > ID=buildroot
	I0318 21:22:55.359022   38073 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 21:22:55.359029   38073 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 21:22:55.359065   38073 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:22:55.359080   38073 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:22:55.359129   38073 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:22:55.359204   38073 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:22:55.359213   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 21:22:55.359292   38073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:22:55.370269   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:22:55.396249   38073 start.go:296] duration metric: took 135.859934ms for postStartSetup
	I0318 21:22:55.396299   38073 fix.go:56] duration metric: took 1m31.648812988s for fixHost
	I0318 21:22:55.396323   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:22:55.398973   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.399358   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.399380   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.399560   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:22:55.399772   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.399984   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.400137   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:22:55.400280   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:22:55.400442   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:22:55.400452   38073 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:22:55.505733   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710796975.479934840
	
	I0318 21:22:55.505756   38073 fix.go:216] guest clock: 1710796975.479934840
	I0318 21:22:55.505767   38073 fix.go:229] Guest: 2024-03-18 21:22:55.47993484 +0000 UTC Remote: 2024-03-18 21:22:55.396305072 +0000 UTC m=+91.781649066 (delta=83.629768ms)
	I0318 21:22:55.505798   38073 fix.go:200] guest clock delta is within tolerance: 83.629768ms
	I0318 21:22:55.505809   38073 start.go:83] releasing machines lock for "multinode-119391", held for 1m31.758354331s
	I0318 21:22:55.505857   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.506088   38073 main.go:141] libmachine: (multinode-119391) Calling .GetIP
	I0318 21:22:55.508766   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.509179   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.509205   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.509336   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.509814   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.509997   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.510090   38073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:22:55.510138   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:22:55.510179   38073 ssh_runner.go:195] Run: cat /version.json
	I0318 21:22:55.510200   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:22:55.512364   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.512669   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.512696   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.512715   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.512868   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:22:55.513038   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.513175   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:22:55.513204   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.513222   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.513389   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:22:55.513395   38073 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:22:55.513533   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.513654   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:22:55.513802   38073 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:22:55.598217   38073 command_runner.go:130] > {"iso_version": "v1.32.1-1710573846-18277", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "c68f4945cc664fefa1b332c623244b57043707c8"}
	I0318 21:22:55.598355   38073 ssh_runner.go:195] Run: systemctl --version
	I0318 21:22:55.621246   38073 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 21:22:55.621889   38073 command_runner.go:130] > systemd 252 (252)
	I0318 21:22:55.621937   38073 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 21:22:55.622003   38073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:22:55.784038   38073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 21:22:55.791214   38073 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 21:22:55.791262   38073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:22:55.791324   38073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:22:55.801079   38073 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 21:22:55.801097   38073 start.go:494] detecting cgroup driver to use...
	I0318 21:22:55.801152   38073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:22:55.819251   38073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:22:55.833420   38073 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:22:55.833456   38073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:22:55.847697   38073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:22:55.861655   38073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:22:56.029630   38073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:22:56.174607   38073 docker.go:233] disabling docker service ...
	I0318 21:22:56.174679   38073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:22:56.192806   38073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:22:56.208211   38073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:22:56.349922   38073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:22:56.488350   38073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:22:56.504512   38073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:22:56.525863   38073 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0318 21:22:56.525937   38073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:22:56.525992   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.538923   38073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:22:56.539002   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.552106   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.566668   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.580278   38073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:22:56.593378   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.607467   38073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.619477   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.632173   38073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:22:56.643383   38073 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 21:22:56.643545   38073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:22:56.655572   38073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:22:56.796783   38073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:22:58.533986   38073 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.737166363s)
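
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before the restart. This is an excerpt reconstructed from the commands in the log, not a dump of the real file; any unrelated settings already present in that file are omitted.

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
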
	I0318 21:22:58.534012   38073 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:22:58.534053   38073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:22:58.540169   38073 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0318 21:22:58.540200   38073 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 21:22:58.540211   38073 command_runner.go:130] > Device: 0,22	Inode: 1332        Links: 1
	I0318 21:22:58.540222   38073 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 21:22:58.540233   38073 command_runner.go:130] > Access: 2024-03-18 21:22:58.398889373 +0000
	I0318 21:22:58.540244   38073 command_runner.go:130] > Modify: 2024-03-18 21:22:58.398889373 +0000
	I0318 21:22:58.540256   38073 command_runner.go:130] > Change: 2024-03-18 21:22:58.398889373 +0000
	I0318 21:22:58.540265   38073 command_runner.go:130] >  Birth: -
	I0318 21:22:58.540289   38073 start.go:562] Will wait 60s for crictl version
	I0318 21:22:58.540331   38073 ssh_runner.go:195] Run: which crictl
	I0318 21:22:58.544738   38073 command_runner.go:130] > /usr/bin/crictl
	I0318 21:22:58.544816   38073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:22:58.583883   38073 command_runner.go:130] > Version:  0.1.0
	I0318 21:22:58.583901   38073 command_runner.go:130] > RuntimeName:  cri-o
	I0318 21:22:58.583906   38073 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0318 21:22:58.583911   38073 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 21:22:58.584031   38073 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:22:58.584111   38073 ssh_runner.go:195] Run: crio --version
	I0318 21:22:58.616790   38073 command_runner.go:130] > crio version 1.29.1
	I0318 21:22:58.616808   38073 command_runner.go:130] > Version:        1.29.1
	I0318 21:22:58.616813   38073 command_runner.go:130] > GitCommit:      unknown
	I0318 21:22:58.616817   38073 command_runner.go:130] > GitCommitDate:  unknown
	I0318 21:22:58.616822   38073 command_runner.go:130] > GitTreeState:   clean
	I0318 21:22:58.616834   38073 command_runner.go:130] > BuildDate:      2024-03-16T12:34:20Z
	I0318 21:22:58.616838   38073 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 21:22:58.616842   38073 command_runner.go:130] > Compiler:       gc
	I0318 21:22:58.616847   38073 command_runner.go:130] > Platform:       linux/amd64
	I0318 21:22:58.616850   38073 command_runner.go:130] > Linkmode:       dynamic
	I0318 21:22:58.616855   38073 command_runner.go:130] > BuildTags:      
	I0318 21:22:58.616860   38073 command_runner.go:130] >   containers_image_ostree_stub
	I0318 21:22:58.616865   38073 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 21:22:58.616874   38073 command_runner.go:130] >   btrfs_noversion
	I0318 21:22:58.616884   38073 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 21:22:58.616894   38073 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 21:22:58.616913   38073 command_runner.go:130] >   seccomp
	I0318 21:22:58.616921   38073 command_runner.go:130] > LDFlags:          unknown
	I0318 21:22:58.616928   38073 command_runner.go:130] > SeccompEnabled:   true
	I0318 21:22:58.616933   38073 command_runner.go:130] > AppArmorEnabled:  false
	I0318 21:22:58.617017   38073 ssh_runner.go:195] Run: crio --version
	I0318 21:22:58.647621   38073 command_runner.go:130] > crio version 1.29.1
	I0318 21:22:58.647639   38073 command_runner.go:130] > Version:        1.29.1
	I0318 21:22:58.647644   38073 command_runner.go:130] > GitCommit:      unknown
	I0318 21:22:58.647649   38073 command_runner.go:130] > GitCommitDate:  unknown
	I0318 21:22:58.647653   38073 command_runner.go:130] > GitTreeState:   clean
	I0318 21:22:58.647658   38073 command_runner.go:130] > BuildDate:      2024-03-16T12:34:20Z
	I0318 21:22:58.647662   38073 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 21:22:58.647666   38073 command_runner.go:130] > Compiler:       gc
	I0318 21:22:58.647670   38073 command_runner.go:130] > Platform:       linux/amd64
	I0318 21:22:58.647674   38073 command_runner.go:130] > Linkmode:       dynamic
	I0318 21:22:58.647680   38073 command_runner.go:130] > BuildTags:      
	I0318 21:22:58.647684   38073 command_runner.go:130] >   containers_image_ostree_stub
	I0318 21:22:58.647688   38073 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 21:22:58.647692   38073 command_runner.go:130] >   btrfs_noversion
	I0318 21:22:58.647696   38073 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 21:22:58.647700   38073 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 21:22:58.647706   38073 command_runner.go:130] >   seccomp
	I0318 21:22:58.647710   38073 command_runner.go:130] > LDFlags:          unknown
	I0318 21:22:58.647717   38073 command_runner.go:130] > SeccompEnabled:   true
	I0318 21:22:58.647723   38073 command_runner.go:130] > AppArmorEnabled:  false
	I0318 21:22:58.649999   38073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:22:58.651496   38073 main.go:141] libmachine: (multinode-119391) Calling .GetIP
	I0318 21:22:58.654158   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:58.654520   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:58.654541   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:58.654714   38073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:22:58.659207   38073 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0318 21:22:58.659513   38073 kubeadm.go:877] updating cluster {Name:multinode-119391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:22:58.659662   38073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:22:58.659714   38073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:22:58.710807   38073 command_runner.go:130] > {
	I0318 21:22:58.710830   38073 command_runner.go:130] >   "images": [
	I0318 21:22:58.710837   38073 command_runner.go:130] >     {
	I0318 21:22:58.710849   38073 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 21:22:58.710859   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.710868   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 21:22:58.710874   38073 command_runner.go:130] >       ],
	I0318 21:22:58.710880   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.710894   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 21:22:58.710910   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 21:22:58.710914   38073 command_runner.go:130] >       ],
	I0318 21:22:58.710925   38073 command_runner.go:130] >       "size": "65258016",
	I0318 21:22:58.710931   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.710940   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.710947   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.710956   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.710960   38073 command_runner.go:130] >     },
	I0318 21:22:58.710965   38073 command_runner.go:130] >     {
	I0318 21:22:58.710976   38073 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 21:22:58.710988   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.710999   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 21:22:58.711007   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711012   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711025   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 21:22:58.711038   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 21:22:58.711046   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711055   38073 command_runner.go:130] >       "size": "65291810",
	I0318 21:22:58.711061   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.711074   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711082   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711088   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711096   38073 command_runner.go:130] >     },
	I0318 21:22:58.711101   38073 command_runner.go:130] >     {
	I0318 21:22:58.711113   38073 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 21:22:58.711122   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711130   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 21:22:58.711138   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711147   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711158   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 21:22:58.711171   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 21:22:58.711177   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711185   38073 command_runner.go:130] >       "size": "1363676",
	I0318 21:22:58.711194   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.711199   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711209   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711215   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711223   38073 command_runner.go:130] >     },
	I0318 21:22:58.711229   38073 command_runner.go:130] >     {
	I0318 21:22:58.711242   38073 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 21:22:58.711258   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711269   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 21:22:58.711275   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711284   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711296   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 21:22:58.711323   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 21:22:58.711339   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711346   38073 command_runner.go:130] >       "size": "31470524",
	I0318 21:22:58.711356   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.711362   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711368   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711374   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711382   38073 command_runner.go:130] >     },
	I0318 21:22:58.711387   38073 command_runner.go:130] >     {
	I0318 21:22:58.711399   38073 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 21:22:58.711406   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711421   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 21:22:58.711429   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711436   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711450   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 21:22:58.711463   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 21:22:58.711471   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711478   38073 command_runner.go:130] >       "size": "53621675",
	I0318 21:22:58.711486   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.711493   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711502   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711509   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711518   38073 command_runner.go:130] >     },
	I0318 21:22:58.711525   38073 command_runner.go:130] >     {
	I0318 21:22:58.711537   38073 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 21:22:58.711545   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711555   38073 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 21:22:58.711562   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711569   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711583   38073 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 21:22:58.711597   38073 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 21:22:58.711605   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711611   38073 command_runner.go:130] >       "size": "295456551",
	I0318 21:22:58.711619   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.711625   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.711632   38073 command_runner.go:130] >       },
	I0318 21:22:58.711638   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711653   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711660   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711665   38073 command_runner.go:130] >     },
	I0318 21:22:58.711673   38073 command_runner.go:130] >     {
	I0318 21:22:58.711683   38073 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 21:22:58.711692   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711700   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 21:22:58.711710   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711720   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711731   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 21:22:58.711745   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 21:22:58.711753   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711759   38073 command_runner.go:130] >       "size": "127226832",
	I0318 21:22:58.711768   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.711774   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.711783   38073 command_runner.go:130] >       },
	I0318 21:22:58.711789   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711798   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711804   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711811   38073 command_runner.go:130] >     },
	I0318 21:22:58.711824   38073 command_runner.go:130] >     {
	I0318 21:22:58.711837   38073 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 21:22:58.711844   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711852   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 21:22:58.711860   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711867   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711898   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 21:22:58.711916   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 21:22:58.711923   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711933   38073 command_runner.go:130] >       "size": "123261750",
	I0318 21:22:58.711938   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.711944   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.711953   38073 command_runner.go:130] >       },
	I0318 21:22:58.711959   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711964   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711971   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711980   38073 command_runner.go:130] >     },
	I0318 21:22:58.711985   38073 command_runner.go:130] >     {
	I0318 21:22:58.711995   38073 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 21:22:58.712000   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.712012   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 21:22:58.712017   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712030   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.712041   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 21:22:58.712052   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 21:22:58.712062   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712068   38073 command_runner.go:130] >       "size": "74749335",
	I0318 21:22:58.712078   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.712083   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.712093   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.712099   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.712105   38073 command_runner.go:130] >     },
	I0318 21:22:58.712110   38073 command_runner.go:130] >     {
	I0318 21:22:58.712119   38073 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 21:22:58.712129   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.712139   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 21:22:58.712145   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712154   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.712169   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 21:22:58.712183   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 21:22:58.712189   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712196   38073 command_runner.go:130] >       "size": "61551410",
	I0318 21:22:58.712205   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.712211   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.712219   38073 command_runner.go:130] >       },
	I0318 21:22:58.712225   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.712233   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.712240   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.712249   38073 command_runner.go:130] >     },
	I0318 21:22:58.712259   38073 command_runner.go:130] >     {
	I0318 21:22:58.712271   38073 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 21:22:58.712280   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.712293   38073 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 21:22:58.712301   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712307   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.712321   38073 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 21:22:58.712334   38073 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 21:22:58.712343   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712350   38073 command_runner.go:130] >       "size": "750414",
	I0318 21:22:58.712359   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.712365   38073 command_runner.go:130] >         "value": "65535"
	I0318 21:22:58.712371   38073 command_runner.go:130] >       },
	I0318 21:22:58.712374   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.712379   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.712385   38073 command_runner.go:130] >       "pinned": true
	I0318 21:22:58.712388   38073 command_runner.go:130] >     }
	I0318 21:22:58.712391   38073 command_runner.go:130] >   ]
	I0318 21:22:58.712394   38073 command_runner.go:130] > }
	I0318 21:22:58.712619   38073 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:22:58.712633   38073 crio.go:433] Images already preloaded, skipping extraction
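	For reference, a minimal Go sketch (not minikube's actual code) that decodes the `crictl images --output json` payload shown above. The field names mirror that JSON exactly; the input filename images.json is an assumption for illustration only.

	// sketch: decode the crictl image list shown in the log above
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the JSON keys dumped above: "images", "id", "repoTags", "size", "pinned".
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
			Size     string   `json:"size"`
			Pinned   bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Assumes the JSON from the log was saved to images.json.
		data, err := os.ReadFile("images.json")
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(data, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			tag := "<none>"
			if len(img.RepoTags) > 0 {
				tag = img.RepoTags[0]
			}
			fmt.Printf("%s  %s bytes  pinned=%v\n", tag, img.Size, img.Pinned)
		}
	}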
	I0318 21:22:58.712673   38073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:22:58.750102   38073 command_runner.go:130] > {
	I0318 21:22:58.750124   38073 command_runner.go:130] >   "images": [
	I0318 21:22:58.750131   38073 command_runner.go:130] >     {
	I0318 21:22:58.750140   38073 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 21:22:58.750152   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750160   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 21:22:58.750175   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750181   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750202   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 21:22:58.750214   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 21:22:58.750218   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750224   38073 command_runner.go:130] >       "size": "65258016",
	I0318 21:22:58.750231   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750235   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750248   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750258   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750267   38073 command_runner.go:130] >     },
	I0318 21:22:58.750276   38073 command_runner.go:130] >     {
	I0318 21:22:58.750289   38073 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 21:22:58.750298   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750310   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 21:22:58.750316   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750320   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750329   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 21:22:58.750344   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 21:22:58.750359   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750366   38073 command_runner.go:130] >       "size": "65291810",
	I0318 21:22:58.750375   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750390   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750399   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750409   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750418   38073 command_runner.go:130] >     },
	I0318 21:22:58.750425   38073 command_runner.go:130] >     {
	I0318 21:22:58.750432   38073 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 21:22:58.750440   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750455   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 21:22:58.750474   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750485   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750500   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 21:22:58.750515   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 21:22:58.750524   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750533   38073 command_runner.go:130] >       "size": "1363676",
	I0318 21:22:58.750538   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750543   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750552   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750562   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750568   38073 command_runner.go:130] >     },
	I0318 21:22:58.750577   38073 command_runner.go:130] >     {
	I0318 21:22:58.750590   38073 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 21:22:58.750599   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750610   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 21:22:58.750618   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750627   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750638   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 21:22:58.750662   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 21:22:58.750672   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750679   38073 command_runner.go:130] >       "size": "31470524",
	I0318 21:22:58.750688   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750698   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750708   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750717   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750727   38073 command_runner.go:130] >     },
	I0318 21:22:58.750735   38073 command_runner.go:130] >     {
	I0318 21:22:58.750746   38073 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 21:22:58.750755   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750766   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 21:22:58.750775   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750783   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750798   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 21:22:58.750813   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 21:22:58.750821   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750831   38073 command_runner.go:130] >       "size": "53621675",
	I0318 21:22:58.750846   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750855   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750864   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750874   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750880   38073 command_runner.go:130] >     },
	I0318 21:22:58.750889   38073 command_runner.go:130] >     {
	I0318 21:22:58.750902   38073 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 21:22:58.750911   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750922   38073 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 21:22:58.750931   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750940   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750964   38073 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 21:22:58.750979   38073 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 21:22:58.750990   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750999   38073 command_runner.go:130] >       "size": "295456551",
	I0318 21:22:58.751008   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751018   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.751027   38073 command_runner.go:130] >       },
	I0318 21:22:58.751037   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751045   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751049   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751057   38073 command_runner.go:130] >     },
	I0318 21:22:58.751069   38073 command_runner.go:130] >     {
	I0318 21:22:58.751082   38073 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 21:22:58.751092   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751104   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 21:22:58.751113   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751123   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751138   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 21:22:58.751151   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 21:22:58.751158   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751165   38073 command_runner.go:130] >       "size": "127226832",
	I0318 21:22:58.751174   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751182   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.751191   38073 command_runner.go:130] >       },
	I0318 21:22:58.751201   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751217   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751226   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751235   38073 command_runner.go:130] >     },
	I0318 21:22:58.751243   38073 command_runner.go:130] >     {
	I0318 21:22:58.751249   38073 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 21:22:58.751258   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751269   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 21:22:58.751279   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751288   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751317   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 21:22:58.751333   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 21:22:58.751337   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751351   38073 command_runner.go:130] >       "size": "123261750",
	I0318 21:22:58.751361   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751368   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.751377   38073 command_runner.go:130] >       },
	I0318 21:22:58.751386   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751395   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751404   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751412   38073 command_runner.go:130] >     },
	I0318 21:22:58.751418   38073 command_runner.go:130] >     {
	I0318 21:22:58.751431   38073 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 21:22:58.751441   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751449   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 21:22:58.751458   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751464   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751476   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 21:22:58.751491   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 21:22:58.751499   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751505   38073 command_runner.go:130] >       "size": "74749335",
	I0318 21:22:58.751514   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.751530   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751539   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751546   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751555   38073 command_runner.go:130] >     },
	I0318 21:22:58.751560   38073 command_runner.go:130] >     {
	I0318 21:22:58.751579   38073 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 21:22:58.751589   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751598   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 21:22:58.751606   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751614   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751628   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 21:22:58.751643   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 21:22:58.751651   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751661   38073 command_runner.go:130] >       "size": "61551410",
	I0318 21:22:58.751670   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751677   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.751686   38073 command_runner.go:130] >       },
	I0318 21:22:58.751695   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751703   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751712   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751720   38073 command_runner.go:130] >     },
	I0318 21:22:58.751729   38073 command_runner.go:130] >     {
	I0318 21:22:58.751739   38073 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 21:22:58.751749   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751760   38073 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 21:22:58.751766   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751776   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751791   38073 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 21:22:58.751805   38073 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 21:22:58.751814   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751822   38073 command_runner.go:130] >       "size": "750414",
	I0318 21:22:58.751826   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751836   38073 command_runner.go:130] >         "value": "65535"
	I0318 21:22:58.751845   38073 command_runner.go:130] >       },
	I0318 21:22:58.751855   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751865   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751874   38073 command_runner.go:130] >       "pinned": true
	I0318 21:22:58.751882   38073 command_runner.go:130] >     }
	I0318 21:22:58.751890   38073 command_runner.go:130] >   ]
	I0318 21:22:58.751895   38073 command_runner.go:130] > }
	I0318 21:22:58.752109   38073 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:22:58.752126   38073 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:22:58.752140   38073 kubeadm.go:928] updating node { 192.168.39.127 8443 v1.28.4 crio true true} ...
	I0318 21:22:58.752303   38073 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-119391 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
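	The kubelet override above is generated from the node's name, IP and Kubernetes version. A small illustrative Go sketch of rendering such a systemd drop-in with text/template follows; the struct, template text and values are assumptions based only on the flags and paths logged above, not minikube's actual template code.

	// sketch: render a kubelet systemd override like the one logged above
	package main

	import (
		"os"
		"text/template"
	)

	type kubeletUnit struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		// Values taken from the log line above.
		_ = t.Execute(os.Stdout, kubeletUnit{
			KubernetesVersion: "v1.28.4",
			NodeName:          "multinode-119391",
			NodeIP:            "192.168.39.127",
		})
	}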
	I0318 21:22:58.752396   38073 ssh_runner.go:195] Run: crio config
	I0318 21:22:58.805509   38073 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0318 21:22:58.805544   38073 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0318 21:22:58.805555   38073 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0318 21:22:58.805560   38073 command_runner.go:130] > #
	I0318 21:22:58.805571   38073 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0318 21:22:58.805582   38073 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0318 21:22:58.805598   38073 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0318 21:22:58.805609   38073 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0318 21:22:58.805618   38073 command_runner.go:130] > # reload'.
	I0318 21:22:58.805627   38073 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0318 21:22:58.805641   38073 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0318 21:22:58.805655   38073 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0318 21:22:58.805665   38073 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0318 21:22:58.805673   38073 command_runner.go:130] > [crio]
	I0318 21:22:58.805682   38073 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0318 21:22:58.805694   38073 command_runner.go:130] > # containers images, in this directory.
	I0318 21:22:58.805702   38073 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0318 21:22:58.805721   38073 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0318 21:22:58.805731   38073 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0318 21:22:58.805743   38073 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0318 21:22:58.805753   38073 command_runner.go:130] > # imagestore = ""
	I0318 21:22:58.805762   38073 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0318 21:22:58.805775   38073 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0318 21:22:58.805786   38073 command_runner.go:130] > storage_driver = "overlay"
	I0318 21:22:58.805795   38073 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0318 21:22:58.805808   38073 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0318 21:22:58.805818   38073 command_runner.go:130] > storage_option = [
	I0318 21:22:58.805829   38073 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0318 21:22:58.805838   38073 command_runner.go:130] > ]
	I0318 21:22:58.805848   38073 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0318 21:22:58.805861   38073 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0318 21:22:58.805878   38073 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0318 21:22:58.805890   38073 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0318 21:22:58.805903   38073 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0318 21:22:58.805913   38073 command_runner.go:130] > # always happen on a node reboot
	I0318 21:22:58.805929   38073 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0318 21:22:58.805949   38073 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0318 21:22:58.805968   38073 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0318 21:22:58.805979   38073 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0318 21:22:58.805990   38073 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0318 21:22:58.806006   38073 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0318 21:22:58.806023   38073 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0318 21:22:58.806033   38073 command_runner.go:130] > # internal_wipe = true
	I0318 21:22:58.806045   38073 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0318 21:22:58.806057   38073 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0318 21:22:58.806067   38073 command_runner.go:130] > # internal_repair = false
	I0318 21:22:58.806074   38073 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0318 21:22:58.806086   38073 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0318 21:22:58.806098   38073 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0318 21:22:58.806106   38073 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0318 21:22:58.806118   38073 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0318 21:22:58.806126   38073 command_runner.go:130] > [crio.api]
	I0318 21:22:58.806134   38073 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0318 21:22:58.806145   38073 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0318 21:22:58.806155   38073 command_runner.go:130] > # IP address on which the stream server will listen.
	I0318 21:22:58.806162   38073 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0318 21:22:58.806175   38073 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0318 21:22:58.806185   38073 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0318 21:22:58.806196   38073 command_runner.go:130] > # stream_port = "0"
	I0318 21:22:58.806206   38073 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0318 21:22:58.806216   38073 command_runner.go:130] > # stream_enable_tls = false
	I0318 21:22:58.806224   38073 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0318 21:22:58.806234   38073 command_runner.go:130] > # stream_idle_timeout = ""
	I0318 21:22:58.806243   38073 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0318 21:22:58.806255   38073 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0318 21:22:58.806263   38073 command_runner.go:130] > # minutes.
	I0318 21:22:58.806269   38073 command_runner.go:130] > # stream_tls_cert = ""
	I0318 21:22:58.806288   38073 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0318 21:22:58.806301   38073 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0318 21:22:58.806310   38073 command_runner.go:130] > # stream_tls_key = ""
	I0318 21:22:58.806319   38073 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0318 21:22:58.806335   38073 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0318 21:22:58.806369   38073 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0318 21:22:58.806379   38073 command_runner.go:130] > # stream_tls_ca = ""
	I0318 21:22:58.806390   38073 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 21:22:58.806400   38073 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0318 21:22:58.806411   38073 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 21:22:58.806422   38073 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
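	The two grpc_max_*_msg_size values above cap CRI messages at 16777216 bytes (16 MiB) instead of CRI-O's 80 MiB default. A hedged Go sketch of a client dialing the default crio socket from this config with matching call-size limits; this is illustrative only and not CRI-O or minikube source.

	// sketch: gRPC client options matching the 16 MiB limits configured above
	package main

	import (
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		const maxMsg = 16 * 1024 * 1024 // 16777216, as in grpc_max_{send,recv}_msg_size above
		conn, err := grpc.Dial(
			"unix:///var/run/crio/crio.sock", // default listen path shown earlier in this config
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(maxMsg),
				grpc.MaxCallSendMsgSize(maxMsg),
			),
		)
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		log.Println("connected:", conn.Target())
	}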
	I0318 21:22:58.806447   38073 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0318 21:22:58.806460   38073 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0318 21:22:58.806469   38073 command_runner.go:130] > [crio.runtime]
	I0318 21:22:58.806479   38073 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0318 21:22:58.806491   38073 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0318 21:22:58.806502   38073 command_runner.go:130] > # "nofile=1024:2048"
	I0318 21:22:58.806515   38073 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0318 21:22:58.806524   38073 command_runner.go:130] > # default_ulimits = [
	I0318 21:22:58.806532   38073 command_runner.go:130] > # ]
	I0318 21:22:58.806546   38073 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0318 21:22:58.806555   38073 command_runner.go:130] > # no_pivot = false
	I0318 21:22:58.806568   38073 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0318 21:22:58.806582   38073 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0318 21:22:58.806592   38073 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0318 21:22:58.806600   38073 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0318 21:22:58.806611   38073 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0318 21:22:58.806624   38073 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 21:22:58.806634   38073 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0318 21:22:58.806640   38073 command_runner.go:130] > # Cgroup setting for conmon
	I0318 21:22:58.806653   38073 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0318 21:22:58.806660   38073 command_runner.go:130] > conmon_cgroup = "pod"
	I0318 21:22:58.806673   38073 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0318 21:22:58.806684   38073 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0318 21:22:58.806698   38073 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 21:22:58.806707   38073 command_runner.go:130] > conmon_env = [
	I0318 21:22:58.806733   38073 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 21:22:58.806742   38073 command_runner.go:130] > ]
	I0318 21:22:58.806750   38073 command_runner.go:130] > # Additional environment variables to set for all the
	I0318 21:22:58.806761   38073 command_runner.go:130] > # containers. These are overridden if set in the
	I0318 21:22:58.806774   38073 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0318 21:22:58.806786   38073 command_runner.go:130] > # default_env = [
	I0318 21:22:58.806794   38073 command_runner.go:130] > # ]
	I0318 21:22:58.806803   38073 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0318 21:22:58.806819   38073 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0318 21:22:58.806828   38073 command_runner.go:130] > # selinux = false
	I0318 21:22:58.806839   38073 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0318 21:22:58.806852   38073 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0318 21:22:58.806861   38073 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0318 21:22:58.806871   38073 command_runner.go:130] > # seccomp_profile = ""
	I0318 21:22:58.806879   38073 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0318 21:22:58.806895   38073 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0318 21:22:58.806908   38073 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0318 21:22:58.806918   38073 command_runner.go:130] > # which might increase security.
	I0318 21:22:58.806925   38073 command_runner.go:130] > # This option is currently deprecated,
	I0318 21:22:58.806937   38073 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0318 21:22:58.806947   38073 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0318 21:22:58.806957   38073 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0318 21:22:58.806970   38073 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0318 21:22:58.806981   38073 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0318 21:22:58.806994   38073 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0318 21:22:58.807005   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.807012   38073 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0318 21:22:58.807020   38073 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0318 21:22:58.807030   38073 command_runner.go:130] > # the cgroup blockio controller.
	I0318 21:22:58.807036   38073 command_runner.go:130] > # blockio_config_file = ""
	I0318 21:22:58.807047   38073 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0318 21:22:58.807056   38073 command_runner.go:130] > # blockio parameters.
	I0318 21:22:58.807065   38073 command_runner.go:130] > # blockio_reload = false
	I0318 21:22:58.807079   38073 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0318 21:22:58.807085   38073 command_runner.go:130] > # irqbalance daemon.
	I0318 21:22:58.807094   38073 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0318 21:22:58.807114   38073 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0318 21:22:58.807128   38073 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0318 21:22:58.807141   38073 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0318 21:22:58.807153   38073 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0318 21:22:58.807165   38073 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0318 21:22:58.807175   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.807190   38073 command_runner.go:130] > # rdt_config_file = ""
	I0318 21:22:58.807201   38073 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0318 21:22:58.807210   38073 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0318 21:22:58.807276   38073 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0318 21:22:58.807292   38073 command_runner.go:130] > # separate_pull_cgroup = ""
	I0318 21:22:58.807303   38073 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0318 21:22:58.807317   38073 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0318 21:22:58.807326   38073 command_runner.go:130] > # will be added.
	I0318 21:22:58.807333   38073 command_runner.go:130] > # default_capabilities = [
	I0318 21:22:58.807342   38073 command_runner.go:130] > # 	"CHOWN",
	I0318 21:22:58.807348   38073 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0318 21:22:58.807361   38073 command_runner.go:130] > # 	"FSETID",
	I0318 21:22:58.807370   38073 command_runner.go:130] > # 	"FOWNER",
	I0318 21:22:58.807376   38073 command_runner.go:130] > # 	"SETGID",
	I0318 21:22:58.807384   38073 command_runner.go:130] > # 	"SETUID",
	I0318 21:22:58.807390   38073 command_runner.go:130] > # 	"SETPCAP",
	I0318 21:22:58.807399   38073 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0318 21:22:58.807404   38073 command_runner.go:130] > # 	"KILL",
	I0318 21:22:58.807413   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807424   38073 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0318 21:22:58.807437   38073 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0318 21:22:58.807447   38073 command_runner.go:130] > # add_inheritable_capabilities = false
	I0318 21:22:58.807460   38073 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0318 21:22:58.807472   38073 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 21:22:58.807478   38073 command_runner.go:130] > default_sysctls = [
	I0318 21:22:58.807488   38073 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0318 21:22:58.807492   38073 command_runner.go:130] > ]
	I0318 21:22:58.807502   38073 command_runner.go:130] > # List of devices on the host that a
	I0318 21:22:58.807512   38073 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0318 21:22:58.807521   38073 command_runner.go:130] > # allowed_devices = [
	I0318 21:22:58.807535   38073 command_runner.go:130] > # 	"/dev/fuse",
	I0318 21:22:58.807544   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807552   38073 command_runner.go:130] > # List of additional devices. specified as
	I0318 21:22:58.807568   38073 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0318 21:22:58.807580   38073 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0318 21:22:58.807593   38073 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 21:22:58.807603   38073 command_runner.go:130] > # additional_devices = [
	I0318 21:22:58.807608   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807621   38073 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0318 21:22:58.807630   38073 command_runner.go:130] > # cdi_spec_dirs = [
	I0318 21:22:58.807636   38073 command_runner.go:130] > # 	"/etc/cdi",
	I0318 21:22:58.807645   38073 command_runner.go:130] > # 	"/var/run/cdi",
	I0318 21:22:58.807650   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807664   38073 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0318 21:22:58.807681   38073 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0318 21:22:58.807691   38073 command_runner.go:130] > # Defaults to false.
	I0318 21:22:58.807699   38073 command_runner.go:130] > # device_ownership_from_security_context = false
	I0318 21:22:58.807713   38073 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0318 21:22:58.807725   38073 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0318 21:22:58.807734   38073 command_runner.go:130] > # hooks_dir = [
	I0318 21:22:58.807742   38073 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0318 21:22:58.807750   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807760   38073 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0318 21:22:58.807772   38073 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0318 21:22:58.807784   38073 command_runner.go:130] > # its default mounts from the following two files:
	I0318 21:22:58.807792   38073 command_runner.go:130] > #
	I0318 21:22:58.807800   38073 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0318 21:22:58.807812   38073 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0318 21:22:58.807824   38073 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0318 21:22:58.807832   38073 command_runner.go:130] > #
	I0318 21:22:58.807840   38073 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0318 21:22:58.807853   38073 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0318 21:22:58.807862   38073 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0318 21:22:58.807873   38073 command_runner.go:130] > #      only add mounts it finds in this file.
	I0318 21:22:58.807877   38073 command_runner.go:130] > #
	I0318 21:22:58.807884   38073 command_runner.go:130] > # default_mounts_file = ""
	I0318 21:22:58.807902   38073 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0318 21:22:58.807915   38073 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0318 21:22:58.807924   38073 command_runner.go:130] > pids_limit = 1024
	I0318 21:22:58.807933   38073 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0318 21:22:58.807947   38073 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0318 21:22:58.807961   38073 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0318 21:22:58.807978   38073 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0318 21:22:58.807988   38073 command_runner.go:130] > # log_size_max = -1
	I0318 21:22:58.808001   38073 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0318 21:22:58.808010   38073 command_runner.go:130] > # log_to_journald = false
	I0318 21:22:58.808022   38073 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0318 21:22:58.808033   38073 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0318 21:22:58.808041   38073 command_runner.go:130] > # Path to directory for container attach sockets.
	I0318 21:22:58.808053   38073 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0318 21:22:58.808063   38073 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0318 21:22:58.808073   38073 command_runner.go:130] > # bind_mount_prefix = ""
	I0318 21:22:58.808085   38073 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0318 21:22:58.808095   38073 command_runner.go:130] > # read_only = false
	I0318 21:22:58.808104   38073 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0318 21:22:58.808116   38073 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0318 21:22:58.808125   38073 command_runner.go:130] > # live configuration reload.
	I0318 21:22:58.808132   38073 command_runner.go:130] > # log_level = "info"
	I0318 21:22:58.808143   38073 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0318 21:22:58.808151   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.808160   38073 command_runner.go:130] > # log_filter = ""
	I0318 21:22:58.808169   38073 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0318 21:22:58.808182   38073 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0318 21:22:58.808191   38073 command_runner.go:130] > # separated by comma.
	I0318 21:22:58.808202   38073 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 21:22:58.808211   38073 command_runner.go:130] > # uid_mappings = ""
	I0318 21:22:58.808223   38073 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0318 21:22:58.808236   38073 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0318 21:22:58.808245   38073 command_runner.go:130] > # separated by comma.
	I0318 21:22:58.808255   38073 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 21:22:58.808265   38073 command_runner.go:130] > # gid_mappings = ""
	I0318 21:22:58.808276   38073 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0318 21:22:58.808296   38073 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 21:22:58.808307   38073 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 21:22:58.808321   38073 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 21:22:58.808332   38073 command_runner.go:130] > # minimum_mappable_uid = -1
	I0318 21:22:58.808341   38073 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0318 21:22:58.808354   38073 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 21:22:58.808372   38073 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 21:22:58.808388   38073 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 21:22:58.808399   38073 command_runner.go:130] > # minimum_mappable_gid = -1
	I0318 21:22:58.808411   38073 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0318 21:22:58.808424   38073 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0318 21:22:58.808438   38073 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0318 21:22:58.808448   38073 command_runner.go:130] > # ctr_stop_timeout = 30
	I0318 21:22:58.808458   38073 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0318 21:22:58.808469   38073 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0318 21:22:58.808482   38073 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0318 21:22:58.808493   38073 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0318 21:22:58.808502   38073 command_runner.go:130] > drop_infra_ctr = false
	I0318 21:22:58.808515   38073 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0318 21:22:58.808527   38073 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0318 21:22:58.808540   38073 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0318 21:22:58.808549   38073 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0318 21:22:58.808559   38073 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0318 21:22:58.808571   38073 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0318 21:22:58.808585   38073 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0318 21:22:58.808596   38073 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0318 21:22:58.808607   38073 command_runner.go:130] > # shared_cpuset = ""
	I0318 21:22:58.808619   38073 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0318 21:22:58.808630   38073 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0318 21:22:58.808640   38073 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0318 21:22:58.808655   38073 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0318 21:22:58.808665   38073 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0318 21:22:58.808676   38073 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0318 21:22:58.808696   38073 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0318 21:22:58.808706   38073 command_runner.go:130] > # enable_criu_support = false
	I0318 21:22:58.808716   38073 command_runner.go:130] > # Enable/disable the generation of the container,
	I0318 21:22:58.808735   38073 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0318 21:22:58.808745   38073 command_runner.go:130] > # enable_pod_events = false
	I0318 21:22:58.808757   38073 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0318 21:22:58.808787   38073 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0318 21:22:58.808796   38073 command_runner.go:130] > # default_runtime = "runc"
	I0318 21:22:58.808814   38073 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0318 21:22:58.808830   38073 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0318 21:22:58.808848   38073 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0318 21:22:58.808859   38073 command_runner.go:130] > # creation as a file is not desired either.
	I0318 21:22:58.808874   38073 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0318 21:22:58.808885   38073 command_runner.go:130] > # the hostname is being managed dynamically.
	I0318 21:22:58.808895   38073 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0318 21:22:58.808900   38073 command_runner.go:130] > # ]
	I0318 21:22:58.808928   38073 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0318 21:22:58.808942   38073 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0318 21:22:58.808955   38073 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0318 21:22:58.808967   38073 command_runner.go:130] > # Each entry in the table should follow the format:
	I0318 21:22:58.808975   38073 command_runner.go:130] > #
	I0318 21:22:58.808986   38073 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0318 21:22:58.808997   38073 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0318 21:22:58.809061   38073 command_runner.go:130] > # runtime_type = "oci"
	I0318 21:22:58.809073   38073 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0318 21:22:58.809080   38073 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0318 21:22:58.809087   38073 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0318 21:22:58.809093   38073 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0318 21:22:58.809103   38073 command_runner.go:130] > # monitor_env = []
	I0318 21:22:58.809111   38073 command_runner.go:130] > # privileged_without_host_devices = false
	I0318 21:22:58.809120   38073 command_runner.go:130] > # allowed_annotations = []
	I0318 21:22:58.809128   38073 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0318 21:22:58.809137   38073 command_runner.go:130] > # Where:
	I0318 21:22:58.809145   38073 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0318 21:22:58.809157   38073 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0318 21:22:58.809169   38073 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0318 21:22:58.809181   38073 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0318 21:22:58.809190   38073 command_runner.go:130] > #   in $PATH.
	I0318 21:22:58.809205   38073 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0318 21:22:58.809218   38073 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0318 21:22:58.809232   38073 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0318 21:22:58.809241   38073 command_runner.go:130] > #   state.
	I0318 21:22:58.809253   38073 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0318 21:22:58.809265   38073 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0318 21:22:58.809278   38073 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0318 21:22:58.809290   38073 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0318 21:22:58.809299   38073 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0318 21:22:58.809312   38073 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0318 21:22:58.809322   38073 command_runner.go:130] > #   The currently recognized values are:
	I0318 21:22:58.809337   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0318 21:22:58.809353   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0318 21:22:58.809371   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0318 21:22:58.809382   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0318 21:22:58.809398   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0318 21:22:58.809415   38073 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0318 21:22:58.809428   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0318 21:22:58.809438   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0318 21:22:58.809447   38073 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0318 21:22:58.809459   38073 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0318 21:22:58.809468   38073 command_runner.go:130] > #   deprecated option "conmon".
	I0318 21:22:58.809478   38073 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0318 21:22:58.809488   38073 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0318 21:22:58.809497   38073 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0318 21:22:58.809506   38073 command_runner.go:130] > #   should be moved to the container's cgroup
	I0318 21:22:58.809516   38073 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0318 21:22:58.809527   38073 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0318 21:22:58.809538   38073 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0318 21:22:58.809549   38073 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0318 21:22:58.809555   38073 command_runner.go:130] > #
	I0318 21:22:58.809562   38073 command_runner.go:130] > # Using the seccomp notifier feature:
	I0318 21:22:58.809569   38073 command_runner.go:130] > #
	I0318 21:22:58.809577   38073 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0318 21:22:58.809590   38073 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0318 21:22:58.809595   38073 command_runner.go:130] > #
	I0318 21:22:58.809614   38073 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0318 21:22:58.809627   38073 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0318 21:22:58.809634   38073 command_runner.go:130] > #
	I0318 21:22:58.809642   38073 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0318 21:22:58.809650   38073 command_runner.go:130] > # feature.
	I0318 21:22:58.809655   38073 command_runner.go:130] > #
	I0318 21:22:58.809667   38073 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0318 21:22:58.809681   38073 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0318 21:22:58.809693   38073 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0318 21:22:58.809702   38073 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0318 21:22:58.809714   38073 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0318 21:22:58.809722   38073 command_runner.go:130] > #
	I0318 21:22:58.809732   38073 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0318 21:22:58.809744   38073 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0318 21:22:58.809752   38073 command_runner.go:130] > #
	I0318 21:22:58.809761   38073 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0318 21:22:58.809776   38073 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0318 21:22:58.809784   38073 command_runner.go:130] > #
	I0318 21:22:58.809794   38073 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0318 21:22:58.809807   38073 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0318 21:22:58.809816   38073 command_runner.go:130] > # limitation.
	I0318 21:22:58.809822   38073 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0318 21:22:58.809832   38073 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0318 21:22:58.809844   38073 command_runner.go:130] > runtime_type = "oci"
	I0318 21:22:58.809853   38073 command_runner.go:130] > runtime_root = "/run/runc"
	I0318 21:22:58.809862   38073 command_runner.go:130] > runtime_config_path = ""
	I0318 21:22:58.809870   38073 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0318 21:22:58.809876   38073 command_runner.go:130] > monitor_cgroup = "pod"
	I0318 21:22:58.809884   38073 command_runner.go:130] > monitor_exec_cgroup = ""
	I0318 21:22:58.809890   38073 command_runner.go:130] > monitor_env = [
	I0318 21:22:58.809901   38073 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 21:22:58.809906   38073 command_runner.go:130] > ]
	I0318 21:22:58.809916   38073 command_runner.go:130] > privileged_without_host_devices = false
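
Note (illustrative, not from this run): the [crio.runtime.runtimes.runc] entry above is a concrete instance of the handler schema described earlier. A second handler could be registered the same way; the crun path, the drop-in file name, and the assumption that this CRI-O build reads /etc/crio/crio.conf.d/ drop-ins are hypothetical here, not something this test exercises.

# Hypothetical extra runtime handler, mirroring the fields shown above.
printf '%s\n' \
  '[crio.runtime.runtimes.crun]' \
  'runtime_path = "/usr/bin/crun"' \
  'runtime_type = "oci"' \
  'runtime_root = "/run/crun"' \
  'monitor_path = "/usr/libexec/crio/conmon"' \
  | sudo tee /etc/crio/crio.conf.d/20-crun.conf
sudo systemctl restart crio
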
	I0318 21:22:58.809927   38073 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0318 21:22:58.809938   38073 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0318 21:22:58.809950   38073 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0318 21:22:58.809971   38073 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0318 21:22:58.809988   38073 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0318 21:22:58.809999   38073 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0318 21:22:58.810020   38073 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0318 21:22:58.810035   38073 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0318 21:22:58.810046   38073 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0318 21:22:58.810060   38073 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0318 21:22:58.810069   38073 command_runner.go:130] > # Example:
	I0318 21:22:58.810076   38073 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0318 21:22:58.810081   38073 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0318 21:22:58.810088   38073 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0318 21:22:58.810095   38073 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0318 21:22:58.810101   38073 command_runner.go:130] > # cpuset = 0
	I0318 21:22:58.810106   38073 command_runner.go:130] > # cpushares = "0-1"
	I0318 21:22:58.810111   38073 command_runner.go:130] > # Where:
	I0318 21:22:58.810117   38073 command_runner.go:130] > # The workload name is workload-type.
	I0318 21:22:58.810127   38073 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0318 21:22:58.810135   38073 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0318 21:22:58.810143   38073 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0318 21:22:58.810156   38073 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0318 21:22:58.810164   38073 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
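
Note (illustrative, not from this run): a pod opts into a workload purely through annotations, as the comments above describe. The manifest below just mirrors the commented example; the workload name, annotation keys and container name are illustrative, not taken from this cluster.

# Key-only activation annotation plus a per-container override, per the example above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""
    io.crio.workload-type/app: '{"cpushares": "512"}'
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
EOF
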
	I0318 21:22:58.810172   38073 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0318 21:22:58.810182   38073 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0318 21:22:58.810188   38073 command_runner.go:130] > # Default value is set to true
	I0318 21:22:58.810194   38073 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0318 21:22:58.810203   38073 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0318 21:22:58.810210   38073 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0318 21:22:58.810216   38073 command_runner.go:130] > # Default value is set to 'false'
	I0318 21:22:58.810223   38073 command_runner.go:130] > # disable_hostport_mapping = false
	I0318 21:22:58.810236   38073 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0318 21:22:58.810245   38073 command_runner.go:130] > #
	I0318 21:22:58.810257   38073 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0318 21:22:58.810269   38073 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0318 21:22:58.810282   38073 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0318 21:22:58.810295   38073 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0318 21:22:58.810307   38073 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0318 21:22:58.810322   38073 command_runner.go:130] > [crio.image]
	I0318 21:22:58.810336   38073 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0318 21:22:58.810345   38073 command_runner.go:130] > # default_transport = "docker://"
	I0318 21:22:58.810364   38073 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0318 21:22:58.810376   38073 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0318 21:22:58.810385   38073 command_runner.go:130] > # global_auth_file = ""
	I0318 21:22:58.810397   38073 command_runner.go:130] > # The image used to instantiate infra containers.
	I0318 21:22:58.810408   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.810420   38073 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0318 21:22:58.810439   38073 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0318 21:22:58.810451   38073 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0318 21:22:58.810463   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.810473   38073 command_runner.go:130] > # pause_image_auth_file = ""
	I0318 21:22:58.810485   38073 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0318 21:22:58.810498   38073 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0318 21:22:58.810510   38073 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0318 21:22:58.810523   38073 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0318 21:22:58.810533   38073 command_runner.go:130] > # pause_command = "/pause"
	I0318 21:22:58.810546   38073 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0318 21:22:58.810566   38073 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0318 21:22:58.810578   38073 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0318 21:22:58.810590   38073 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0318 21:22:58.810603   38073 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0318 21:22:58.810616   38073 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0318 21:22:58.810626   38073 command_runner.go:130] > # pinned_images = [
	I0318 21:22:58.810635   38073 command_runner.go:130] > # ]
	I0318 21:22:58.810648   38073 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0318 21:22:58.810660   38073 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0318 21:22:58.810673   38073 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0318 21:22:58.810685   38073 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0318 21:22:58.810696   38073 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0318 21:22:58.810705   38073 command_runner.go:130] > # signature_policy = ""
	I0318 21:22:58.810716   38073 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0318 21:22:58.810729   38073 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0318 21:22:58.810740   38073 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0318 21:22:58.810752   38073 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0318 21:22:58.810769   38073 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0318 21:22:58.810780   38073 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0318 21:22:58.810794   38073 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0318 21:22:58.810807   38073 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0318 21:22:58.810816   38073 command_runner.go:130] > # changing them here.
	I0318 21:22:58.810827   38073 command_runner.go:130] > # insecure_registries = [
	I0318 21:22:58.810835   38073 command_runner.go:130] > # ]
	I0318 21:22:58.810849   38073 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0318 21:22:58.810859   38073 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0318 21:22:58.810869   38073 command_runner.go:130] > # image_volumes = "mkdir"
	I0318 21:22:58.810877   38073 command_runner.go:130] > # Temporary directory to use for storing big files
	I0318 21:22:58.810888   38073 command_runner.go:130] > # big_files_temporary_dir = ""
	I0318 21:22:58.810899   38073 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0318 21:22:58.810908   38073 command_runner.go:130] > # CNI plugins.
	I0318 21:22:58.810915   38073 command_runner.go:130] > [crio.network]
	I0318 21:22:58.810927   38073 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0318 21:22:58.810937   38073 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0318 21:22:58.810946   38073 command_runner.go:130] > # cni_default_network = ""
	I0318 21:22:58.810957   38073 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0318 21:22:58.810967   38073 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0318 21:22:58.810981   38073 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0318 21:22:58.810989   38073 command_runner.go:130] > # plugin_dirs = [
	I0318 21:22:58.810995   38073 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0318 21:22:58.811004   38073 command_runner.go:130] > # ]
	I0318 21:22:58.811012   38073 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0318 21:22:58.811020   38073 command_runner.go:130] > [crio.metrics]
	I0318 21:22:58.811030   38073 command_runner.go:130] > # Globally enable or disable metrics support.
	I0318 21:22:58.811038   38073 command_runner.go:130] > enable_metrics = true
	I0318 21:22:58.811045   38073 command_runner.go:130] > # Specify enabled metrics collectors.
	I0318 21:22:58.811054   38073 command_runner.go:130] > # Per default all metrics are enabled.
	I0318 21:22:58.811065   38073 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0318 21:22:58.811077   38073 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0318 21:22:58.811088   38073 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0318 21:22:58.811097   38073 command_runner.go:130] > # metrics_collectors = [
	I0318 21:22:58.811104   38073 command_runner.go:130] > # 	"operations",
	I0318 21:22:58.811114   38073 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0318 21:22:58.811130   38073 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0318 21:22:58.811140   38073 command_runner.go:130] > # 	"operations_errors",
	I0318 21:22:58.811146   38073 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0318 21:22:58.811155   38073 command_runner.go:130] > # 	"image_pulls_by_name",
	I0318 21:22:58.811165   38073 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0318 21:22:58.811174   38073 command_runner.go:130] > # 	"image_pulls_failures",
	I0318 21:22:58.811183   38073 command_runner.go:130] > # 	"image_pulls_successes",
	I0318 21:22:58.811193   38073 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0318 21:22:58.811202   38073 command_runner.go:130] > # 	"image_layer_reuse",
	I0318 21:22:58.811212   38073 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0318 21:22:58.811221   38073 command_runner.go:130] > # 	"containers_oom_total",
	I0318 21:22:58.811229   38073 command_runner.go:130] > # 	"containers_oom",
	I0318 21:22:58.811235   38073 command_runner.go:130] > # 	"processes_defunct",
	I0318 21:22:58.811243   38073 command_runner.go:130] > # 	"operations_total",
	I0318 21:22:58.811253   38073 command_runner.go:130] > # 	"operations_latency_seconds",
	I0318 21:22:58.811262   38073 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0318 21:22:58.811272   38073 command_runner.go:130] > # 	"operations_errors_total",
	I0318 21:22:58.811282   38073 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0318 21:22:58.811291   38073 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0318 21:22:58.811302   38073 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0318 21:22:58.811312   38073 command_runner.go:130] > # 	"image_pulls_success_total",
	I0318 21:22:58.811319   38073 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0318 21:22:58.811328   38073 command_runner.go:130] > # 	"containers_oom_count_total",
	I0318 21:22:58.811338   38073 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0318 21:22:58.811347   38073 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0318 21:22:58.811355   38073 command_runner.go:130] > # ]
	I0318 21:22:58.811371   38073 command_runner.go:130] > # The port on which the metrics server will listen.
	I0318 21:22:58.811380   38073 command_runner.go:130] > # metrics_port = 9090
	I0318 21:22:58.811392   38073 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0318 21:22:58.811401   38073 command_runner.go:130] > # metrics_socket = ""
	I0318 21:22:58.811415   38073 command_runner.go:130] > # The certificate for the secure metrics server.
	I0318 21:22:58.811425   38073 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0318 21:22:58.811437   38073 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0318 21:22:58.811447   38073 command_runner.go:130] > # certificate on any modification event.
	I0318 21:22:58.811456   38073 command_runner.go:130] > # metrics_cert = ""
	I0318 21:22:58.811467   38073 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0318 21:22:58.811484   38073 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0318 21:22:58.811493   38073 command_runner.go:130] > # metrics_key = ""
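
Note (illustrative, not from this run): with enable_metrics = true as set above, the Prometheus endpoint listens on the commented default metrics_port unless overridden; a quick probe from the node could look like this, assuming port 9090 is still the effective value:

curl -sf http://127.0.0.1:9090/metrics | head -n 5
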
	I0318 21:22:58.811500   38073 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0318 21:22:58.811508   38073 command_runner.go:130] > [crio.tracing]
	I0318 21:22:58.811525   38073 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0318 21:22:58.811533   38073 command_runner.go:130] > # enable_tracing = false
	I0318 21:22:58.811550   38073 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0318 21:22:58.811560   38073 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0318 21:22:58.811574   38073 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0318 21:22:58.811583   38073 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0318 21:22:58.811589   38073 command_runner.go:130] > # CRI-O NRI configuration.
	I0318 21:22:58.811597   38073 command_runner.go:130] > [crio.nri]
	I0318 21:22:58.811603   38073 command_runner.go:130] > # Globally enable or disable NRI.
	I0318 21:22:58.811611   38073 command_runner.go:130] > # enable_nri = false
	I0318 21:22:58.811620   38073 command_runner.go:130] > # NRI socket to listen on.
	I0318 21:22:58.811629   38073 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0318 21:22:58.811638   38073 command_runner.go:130] > # NRI plugin directory to use.
	I0318 21:22:58.811649   38073 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0318 21:22:58.811659   38073 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0318 21:22:58.811670   38073 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0318 21:22:58.811677   38073 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0318 21:22:58.811686   38073 command_runner.go:130] > # nri_disable_connections = false
	I0318 21:22:58.811696   38073 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0318 21:22:58.811706   38073 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0318 21:22:58.811716   38073 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0318 21:22:58.811726   38073 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0318 21:22:58.811739   38073 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0318 21:22:58.811746   38073 command_runner.go:130] > [crio.stats]
	I0318 21:22:58.811755   38073 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0318 21:22:58.811765   38073 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0318 21:22:58.811774   38073 command_runner.go:130] > # stats_collection_period = 0
	I0318 21:22:58.812088   38073 command_runner.go:130] ! time="2024-03-18 21:22:58.770503705Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0318 21:22:58.812114   38073 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0318 21:22:58.812280   38073 cni.go:84] Creating CNI manager for ""
	I0318 21:22:58.812307   38073 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 21:22:58.812324   38073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:22:58.812355   38073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-119391 NodeName:multinode-119391 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:22:58.812495   38073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-119391"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
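
Note (illustrative, not from this run): the rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. If this kubeadm release supports the `kubeadm config validate` subcommand, the file could be sanity-checked in place on the node:

# The kubeadm binary path is taken from the binaries listing below.
sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
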
	I0318 21:22:58.812553   38073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:22:58.823059   38073 command_runner.go:130] > kubeadm
	I0318 21:22:58.823078   38073 command_runner.go:130] > kubectl
	I0318 21:22:58.823083   38073 command_runner.go:130] > kubelet
	I0318 21:22:58.823101   38073 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:22:58.823142   38073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:22:58.833104   38073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0318 21:22:58.851789   38073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:22:58.870357   38073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0318 21:22:58.888730   38073 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I0318 21:22:58.892858   38073 command_runner.go:130] > 192.168.39.127	control-plane.minikube.internal
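
Note (illustrative, not from this run): the grep above only confirms the control-plane alias is already present in /etc/hosts; an idempotent variant that appends it when missing could be:

grep -q 'control-plane.minikube.internal' /etc/hosts || \
  printf '192.168.39.127\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts
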
	I0318 21:22:58.893007   38073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:22:59.046371   38073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:22:59.062990   38073 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391 for IP: 192.168.39.127
	I0318 21:22:59.063003   38073 certs.go:194] generating shared ca certs ...
	I0318 21:22:59.063018   38073 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:22:59.063167   38073 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:22:59.063224   38073 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:22:59.063238   38073 certs.go:256] generating profile certs ...
	I0318 21:22:59.063343   38073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/client.key
	I0318 21:22:59.063428   38073 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.key.385a54af
	I0318 21:22:59.063475   38073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.key
	I0318 21:22:59.063489   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 21:22:59.063508   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 21:22:59.063524   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 21:22:59.063540   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 21:22:59.063554   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 21:22:59.063572   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 21:22:59.063590   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 21:22:59.063607   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 21:22:59.063674   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:22:59.063714   38073 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:22:59.063732   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:22:59.063774   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:22:59.063806   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:22:59.063835   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:22:59.063884   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:22:59.063922   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.063941   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.063963   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.064853   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:22:59.092594   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:22:59.119477   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:22:59.150022   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:22:59.176097   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 21:22:59.202972   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:22:59.228806   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:22:59.255680   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:22:59.282472   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:22:59.308911   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:22:59.334906   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:22:59.361021   38073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:22:59.378745   38073 ssh_runner.go:195] Run: openssl version
	I0318 21:22:59.384894   38073 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 21:22:59.385093   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:22:59.396854   38073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.401830   38073 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.401861   38073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.401899   38073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.407853   38073 command_runner.go:130] > b5213941
	I0318 21:22:59.407916   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:22:59.417952   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:22:59.429751   38073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.434655   38073 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.434717   38073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.434758   38073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.440892   38073 command_runner.go:130] > 51391683
	I0318 21:22:59.440962   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:22:59.450682   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:22:59.462070   38073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.467117   38073 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.467213   38073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.467247   38073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.473329   38073 command_runner.go:130] > 3ec20f2e
	I0318 21:22:59.473402   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
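
Note (illustrative, not from this run): the openssl -hash / ln -fs pairs above populate OpenSSL's hashed CA directory, where each CA under /etc/ssl/certs is reachable via a <subject-hash>.0 symlink (b5213941 for minikubeCA here). The same convention for a single certificate:

hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
# The CA should now verify against the hashed directory.
openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem
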
	I0318 21:22:59.483377   38073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:22:59.488134   38073 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:22:59.488147   38073 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 21:22:59.488153   38073 command_runner.go:130] > Device: 253,1	Inode: 8385597     Links: 1
	I0318 21:22:59.488159   38073 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 21:22:59.488164   38073 command_runner.go:130] > Access: 2024-03-18 21:16:37.818530090 +0000
	I0318 21:22:59.488169   38073 command_runner.go:130] > Modify: 2024-03-18 21:16:37.818530090 +0000
	I0318 21:22:59.488174   38073 command_runner.go:130] > Change: 2024-03-18 21:16:37.818530090 +0000
	I0318 21:22:59.488181   38073 command_runner.go:130] >  Birth: 2024-03-18 21:16:37.818530090 +0000
	I0318 21:22:59.488477   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:22:59.494840   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.494878   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:22:59.500851   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.501014   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:22:59.506924   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.507175   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:22:59.513168   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.513231   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:22:59.519343   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.519408   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:22:59.525091   38073 command_runner.go:130] > Certificate will not expire
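
Note (illustrative, not from this run): each "Certificate will not expire" line above is openssl's -checkend probe, with 86400 seconds meaning "still valid 24 hours from now". Looping over a couple of the same files:

# Exit status 0 (and the message above) means the certificate outlives the window.
for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         /var/lib/minikube/certs/etcd/server.crt; do
  sudo openssl x509 -noout -in "$c" -checkend 86400 && echo "still valid: $c"
done
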
	I0318 21:22:59.525417   38073 kubeadm.go:391] StartCluster: {Name:multinode-119391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:22:59.525529   38073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:22:59.525559   38073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:22:59.565504   38073 command_runner.go:130] > 0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb
	I0318 21:22:59.565532   38073 command_runner.go:130] > 398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11
	I0318 21:22:59.565546   38073 command_runner.go:130] > 96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207
	I0318 21:22:59.565557   38073 command_runner.go:130] > 5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14
	I0318 21:22:59.565565   38073 command_runner.go:130] > 43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902
	I0318 21:22:59.565577   38073 command_runner.go:130] > d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb
	I0318 21:22:59.565586   38073 command_runner.go:130] > e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb
	I0318 21:22:59.565602   38073 command_runner.go:130] > fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7
	I0318 21:22:59.565629   38073 cri.go:89] found id: "0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb"
	I0318 21:22:59.565638   38073 cri.go:89] found id: "398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11"
	I0318 21:22:59.565641   38073 cri.go:89] found id: "96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207"
	I0318 21:22:59.565645   38073 cri.go:89] found id: "5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14"
	I0318 21:22:59.565647   38073 cri.go:89] found id: "43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902"
	I0318 21:22:59.565650   38073 cri.go:89] found id: "d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb"
	I0318 21:22:59.565652   38073 cri.go:89] found id: "e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb"
	I0318 21:22:59.565655   38073 cri.go:89] found id: "fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7"
	I0318 21:22:59.565657   38073 cri.go:89] found id: ""
	I0318 21:22:59.565708   38073 ssh_runner.go:195] Run: sudo runc list -f json
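
The trace above shows minikube's cri.go enumerating kube-system containers by shelling out to crictl with a pod-namespace label filter, then cross-checking the result with `sudo runc list -f json`. A minimal Go sketch of the same listing step, assuming crictl is on PATH and runnable via sudo (a simplified illustration, not minikube's actual cri.go implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs the same command recorded in the log:
	//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	// and returns one container ID per line of output.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}

With `--quiet`, crictl prints one container ID per line, which matches the sequence of "found id:" entries logged above.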
	
	
	==> CRI-O <==
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.599305514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797071599280969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34e9ed3b-3b68-444f-9e63-c1bab014a866 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.599973503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df2d1fbd-d20b-4a30-9b22-dac1d69ee43d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.600029496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df2d1fbd-d20b-4a30-9b22-dac1d69ee43d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.600389708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b45b7562748e71d7d808e4760837ff01a8c1f098ba54384a33ba81d08d0689a,PodSandboxId:dc20f1507c314fdf90490ae09911da854ccb06ce0f041884724adad3ebdca9e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710797020503255615,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae,PodSandboxId:547739c0e2721a14e8f6dcf08a3da0e30e2bfd2e4421a9d108eca0b6228527fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710796987052703061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a,PodSandboxId:f0b5dedde71873493790fdc36483b567ef2c2652d79f92c562af8ba92dc870ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710796986894611487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec,PodSandboxId:2d1cf8bb01a0429c6658f518f57c5b440ddffabd8d6f706fd633839c6a2b94ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710796986752819093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]
string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e6849218ee2611d2dd1312173e2b062323c80d096478d2eeb1611d3a70a324,PodSandboxId:64735673489b91d1e3d711b115f3219263e34c77f7db1b561f71b85496c47082,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710796986692929339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a,PodSandboxId:b7fc4ed62592a84d4dcb5b5a10d26dc6069d4631eb44657e36b1b0123873571d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710796982091999554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c7403e982fd0b2e0f9e873df315329,},Annotations:map[string
]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3,PodSandboxId:442a4c7855fd57783f2520758e95ffe5caea8862b1bdecb999472c819dd8f51d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710796982096523507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de7732043fa0fb0828f2b8,},Annotations:map[string]string{io.kub
ernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184,PodSandboxId:ba57e976bf657ccb58a66cf0790ba1529454da501db50d3f8d04035217c188bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710796982008177435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11,PodSandboxId:ebd15a78f9d6c2abdeb81034e284e4761485d6f3f1f0b8638b5b157a51c8e503,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710796981983065728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932861c3dfa0de87150b321d8d78af52ade713c36cc49b3bb0b4511e314ff68e,PodSandboxId:af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710796676091142865,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb,PodSandboxId:01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710796628452003335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.kubernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11,PodSandboxId:47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710796628421392384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207,PodSandboxId:62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710796626618455559,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14,PodSandboxId:a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710796622628455059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb,PodSandboxId:d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710796601879187192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902,PodSandboxId:c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710796601890978485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de77320
43fa0fb0828f2b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb,PodSandboxId:acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710796601786623838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c74
03e982fd0b2e0f9e873df315329,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7,PodSandboxId:aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710796601773824534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map
[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df2d1fbd-d20b-4a30-9b22-dac1d69ee43d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.649250400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd9095a6-6a7b-4555-8cbc-e0ec07653a5e name=/runtime.v1.RuntimeService/Version
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.649354715Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd9095a6-6a7b-4555-8cbc-e0ec07653a5e name=/runtime.v1.RuntimeService/Version
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.650861423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=346d60c6-3134-48d1-9675-dc1242a20637 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.651269894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797071651247883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=346d60c6-3134-48d1-9675-dc1242a20637 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.651871298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=109c0007-d9eb-4a75-9ba8-370741d03140 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.651952658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=109c0007-d9eb-4a75-9ba8-370741d03140 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.652290916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b45b7562748e71d7d808e4760837ff01a8c1f098ba54384a33ba81d08d0689a,PodSandboxId:dc20f1507c314fdf90490ae09911da854ccb06ce0f041884724adad3ebdca9e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710797020503255615,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae,PodSandboxId:547739c0e2721a14e8f6dcf08a3da0e30e2bfd2e4421a9d108eca0b6228527fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710796987052703061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a,PodSandboxId:f0b5dedde71873493790fdc36483b567ef2c2652d79f92c562af8ba92dc870ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710796986894611487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec,PodSandboxId:2d1cf8bb01a0429c6658f518f57c5b440ddffabd8d6f706fd633839c6a2b94ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710796986752819093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]
string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e6849218ee2611d2dd1312173e2b062323c80d096478d2eeb1611d3a70a324,PodSandboxId:64735673489b91d1e3d711b115f3219263e34c77f7db1b561f71b85496c47082,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710796986692929339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a,PodSandboxId:b7fc4ed62592a84d4dcb5b5a10d26dc6069d4631eb44657e36b1b0123873571d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710796982091999554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c7403e982fd0b2e0f9e873df315329,},Annotations:map[string
]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3,PodSandboxId:442a4c7855fd57783f2520758e95ffe5caea8862b1bdecb999472c819dd8f51d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710796982096523507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de7732043fa0fb0828f2b8,},Annotations:map[string]string{io.kub
ernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184,PodSandboxId:ba57e976bf657ccb58a66cf0790ba1529454da501db50d3f8d04035217c188bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710796982008177435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11,PodSandboxId:ebd15a78f9d6c2abdeb81034e284e4761485d6f3f1f0b8638b5b157a51c8e503,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710796981983065728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932861c3dfa0de87150b321d8d78af52ade713c36cc49b3bb0b4511e314ff68e,PodSandboxId:af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710796676091142865,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb,PodSandboxId:01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710796628452003335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.kubernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11,PodSandboxId:47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710796628421392384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207,PodSandboxId:62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710796626618455559,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14,PodSandboxId:a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710796622628455059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb,PodSandboxId:d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710796601879187192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902,PodSandboxId:c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710796601890978485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de77320
43fa0fb0828f2b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb,PodSandboxId:acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710796601786623838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c74
03e982fd0b2e0f9e873df315329,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7,PodSandboxId:aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710796601773824534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map
[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=109c0007-d9eb-4a75-9ba8-370741d03140 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.705131361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=430a2f14-da37-4de1-9402-ec6a6d62fe0a name=/runtime.v1.RuntimeService/Version
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.705195356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=430a2f14-da37-4de1-9402-ec6a6d62fe0a name=/runtime.v1.RuntimeService/Version
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.706953650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=233e3df7-60a4-4ce3-8fe3-11707bfd380f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.707345096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797071707325544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=233e3df7-60a4-4ce3-8fe3-11707bfd380f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.708034444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=583853ae-2c03-40ba-b138-5390a3d490fa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.708089432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=583853ae-2c03-40ba-b138-5390a3d490fa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.708402986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b45b7562748e71d7d808e4760837ff01a8c1f098ba54384a33ba81d08d0689a,PodSandboxId:dc20f1507c314fdf90490ae09911da854ccb06ce0f041884724adad3ebdca9e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710797020503255615,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae,PodSandboxId:547739c0e2721a14e8f6dcf08a3da0e30e2bfd2e4421a9d108eca0b6228527fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710796987052703061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a,PodSandboxId:f0b5dedde71873493790fdc36483b567ef2c2652d79f92c562af8ba92dc870ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710796986894611487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec,PodSandboxId:2d1cf8bb01a0429c6658f518f57c5b440ddffabd8d6f706fd633839c6a2b94ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710796986752819093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]
string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e6849218ee2611d2dd1312173e2b062323c80d096478d2eeb1611d3a70a324,PodSandboxId:64735673489b91d1e3d711b115f3219263e34c77f7db1b561f71b85496c47082,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710796986692929339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a,PodSandboxId:b7fc4ed62592a84d4dcb5b5a10d26dc6069d4631eb44657e36b1b0123873571d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710796982091999554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c7403e982fd0b2e0f9e873df315329,},Annotations:map[string
]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3,PodSandboxId:442a4c7855fd57783f2520758e95ffe5caea8862b1bdecb999472c819dd8f51d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710796982096523507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de7732043fa0fb0828f2b8,},Annotations:map[string]string{io.kub
ernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184,PodSandboxId:ba57e976bf657ccb58a66cf0790ba1529454da501db50d3f8d04035217c188bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710796982008177435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11,PodSandboxId:ebd15a78f9d6c2abdeb81034e284e4761485d6f3f1f0b8638b5b157a51c8e503,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710796981983065728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932861c3dfa0de87150b321d8d78af52ade713c36cc49b3bb0b4511e314ff68e,PodSandboxId:af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710796676091142865,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb,PodSandboxId:01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710796628452003335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.kubernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11,PodSandboxId:47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710796628421392384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207,PodSandboxId:62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710796626618455559,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14,PodSandboxId:a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710796622628455059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb,PodSandboxId:d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710796601879187192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902,PodSandboxId:c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710796601890978485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de77320
43fa0fb0828f2b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb,PodSandboxId:acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710796601786623838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c74
03e982fd0b2e0f9e873df315329,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7,PodSandboxId:aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710796601773824534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map
[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=583853ae-2c03-40ba-b138-5390a3d490fa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.761153866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eef10f70-006a-43ac-a79c-9dee276a3c8d name=/runtime.v1.RuntimeService/Version
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.761244218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eef10f70-006a-43ac-a79c-9dee276a3c8d name=/runtime.v1.RuntimeService/Version
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.762205704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3b5dd72-1945-4545-a0cd-efdf5409a541 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.762803420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797071762779332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3b5dd72-1945-4545-a0cd-efdf5409a541 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.763233899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7d37a5f-6957-4fb6-af65-0f495628e6c2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.763316531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7d37a5f-6957-4fb6-af65-0f495628e6c2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:24:31 multinode-119391 crio[2889]: time="2024-03-18 21:24:31.763772578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b45b7562748e71d7d808e4760837ff01a8c1f098ba54384a33ba81d08d0689a,PodSandboxId:dc20f1507c314fdf90490ae09911da854ccb06ce0f041884724adad3ebdca9e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710797020503255615,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae,PodSandboxId:547739c0e2721a14e8f6dcf08a3da0e30e2bfd2e4421a9d108eca0b6228527fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710796987052703061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a,PodSandboxId:f0b5dedde71873493790fdc36483b567ef2c2652d79f92c562af8ba92dc870ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710796986894611487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec,PodSandboxId:2d1cf8bb01a0429c6658f518f57c5b440ddffabd8d6f706fd633839c6a2b94ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710796986752819093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]
string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e6849218ee2611d2dd1312173e2b062323c80d096478d2eeb1611d3a70a324,PodSandboxId:64735673489b91d1e3d711b115f3219263e34c77f7db1b561f71b85496c47082,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710796986692929339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a,PodSandboxId:b7fc4ed62592a84d4dcb5b5a10d26dc6069d4631eb44657e36b1b0123873571d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710796982091999554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c7403e982fd0b2e0f9e873df315329,},Annotations:map[string
]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3,PodSandboxId:442a4c7855fd57783f2520758e95ffe5caea8862b1bdecb999472c819dd8f51d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710796982096523507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de7732043fa0fb0828f2b8,},Annotations:map[string]string{io.kub
ernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184,PodSandboxId:ba57e976bf657ccb58a66cf0790ba1529454da501db50d3f8d04035217c188bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710796982008177435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11,PodSandboxId:ebd15a78f9d6c2abdeb81034e284e4761485d6f3f1f0b8638b5b157a51c8e503,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710796981983065728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932861c3dfa0de87150b321d8d78af52ade713c36cc49b3bb0b4511e314ff68e,PodSandboxId:af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710796676091142865,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb,PodSandboxId:01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710796628452003335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.kubernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11,PodSandboxId:47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710796628421392384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207,PodSandboxId:62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710796626618455559,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14,PodSandboxId:a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710796622628455059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb,PodSandboxId:d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710796601879187192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902,PodSandboxId:c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710796601890978485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de77320
43fa0fb0828f2b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb,PodSandboxId:acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710796601786623838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c74
03e982fd0b2e0f9e873df315329,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7,PodSandboxId:aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710796601773824534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map
[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7d37a5f-6957-4fb6-af65-0f495628e6c2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8b45b7562748e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      51 seconds ago       Running             busybox                   1                   dc20f1507c314       busybox-5b5d89c9d6-dr5bb
	97ab1bff4ddf1       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   547739c0e2721       kindnet-6zr7q
	c2ab6c1338fff       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   f0b5dedde7187       coredns-5dd5756b68-xj892
	1b711be87d96c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   2d1cf8bb01a04       kube-proxy-c9wgb
	e2e6849218ee2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   64735673489b9       storage-provisioner
	52147f9d7d0df       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   442a4c7855fd5       kube-scheduler-multinode-119391
	758c08c47f939       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   b7fc4ed62592a       kube-controller-manager-multinode-119391
	5579372217f2f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   ba57e976bf657       etcd-multinode-119391
	96a1cf12459b1       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   ebd15a78f9d6c       kube-apiserver-multinode-119391
	932861c3dfa0d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   af7475bf9389b       busybox-5b5d89c9d6-dr5bb
	0510b1eb0ef35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   01d9677ae8258       storage-provisioner
	398bf6b192733       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   47ba2c35bc6ad       coredns-5dd5756b68-xj892
	96ec94d755227       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   62625602ffb83       kindnet-6zr7q
	5c6e17a452796       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   a964aaa38e35f       kube-proxy-c9wgb
	43b05d04b29b4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   c6cf59e3b1331       kube-scheduler-multinode-119391
	d889df6742370       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   d52b4552b2062       kube-apiserver-multinode-119391
	e6fd37ada119d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   acffb6afe556b       kube-controller-manager-multinode-119391
	fb5aec4cb8dd3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   aa820b6a5ec03       etcd-multinode-119391
	
	
	==> coredns [398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11] <==
	[INFO] 10.244.1.2:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001726354s
	[INFO] 10.244.1.2:59505 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110534s
	[INFO] 10.244.1.2:43397 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201607s
	[INFO] 10.244.1.2:59395 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00113884s
	[INFO] 10.244.1.2:47829 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201218s
	[INFO] 10.244.1.2:35041 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111801s
	[INFO] 10.244.1.2:42015 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095353s
	[INFO] 10.244.0.3:44370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007968s
	[INFO] 10.244.0.3:45256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046964s
	[INFO] 10.244.0.3:36937 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089471s
	[INFO] 10.244.0.3:51478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004386s
	[INFO] 10.244.1.2:35131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140111s
	[INFO] 10.244.1.2:49384 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135795s
	[INFO] 10.244.1.2:50850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007096s
	[INFO] 10.244.1.2:37905 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075959s
	[INFO] 10.244.0.3:37500 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107301s
	[INFO] 10.244.0.3:53084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138644s
	[INFO] 10.244.0.3:37651 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009517s
	[INFO] 10.244.0.3:36490 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145484s
	[INFO] 10.244.1.2:55397 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151193s
	[INFO] 10.244.1.2:47870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131156s
	[INFO] 10.244.1.2:49477 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179025s
	[INFO] 10.244.1.2:32943 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000210468s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45688 - 3851 "HINFO IN 1606542482993132714.4655986967184080250. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016265241s
	
	
	==> describe nodes <==
	Name:               multinode-119391
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-119391
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=multinode-119391
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T21_16_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:16:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-119391
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:24:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:23:05 +0000   Mon, 18 Mar 2024 21:16:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:23:05 +0000   Mon, 18 Mar 2024 21:16:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:23:05 +0000   Mon, 18 Mar 2024 21:16:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:23:05 +0000   Mon, 18 Mar 2024 21:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-119391
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ced609a92f1a46e48e0bce516406bccd
	  System UUID:                ced609a9-2f1a-46e4-8e0b-ce516406bccd
	  Boot ID:                    9d164ddd-7fc2-478d-af34-eedda433089a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dr5bb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-5dd5756b68-xj892                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m32s
	  kube-system                 etcd-multinode-119391                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m45s
	  kube-system                 kindnet-6zr7q                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m32s
	  kube-system                 kube-apiserver-multinode-119391             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-controller-manager-multinode-119391    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-proxy-c9wgb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-scheduler-multinode-119391             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m29s              kube-proxy       
	  Normal  Starting                 84s                kube-proxy       
	  Normal  NodeAllocatableEnforced  7m45s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m45s              kubelet          Node multinode-119391 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s              kubelet          Node multinode-119391 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s              kubelet          Node multinode-119391 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m45s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m33s              node-controller  Node multinode-119391 event: Registered Node multinode-119391 in Controller
	  Normal  NodeReady                7m25s              kubelet          Node multinode-119391 status is now: NodeReady
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node multinode-119391 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node multinode-119391 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x7 over 91s)  kubelet          Node multinode-119391 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           74s                node-controller  Node multinode-119391 event: Registered Node multinode-119391 in Controller
	
	
	Name:               multinode-119391-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-119391-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=multinode-119391
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T21_23_49_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:23:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-119391-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:24:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:24:19 +0000   Mon, 18 Mar 2024 21:23:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:24:19 +0000   Mon, 18 Mar 2024 21:23:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:24:19 +0000   Mon, 18 Mar 2024 21:23:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:24:19 +0000   Mon, 18 Mar 2024 21:23:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    multinode-119391-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b79ff4b3ad8948948e81374df92dd3d1
	  System UUID:                b79ff4b3-ad89-4894-8e81-374df92dd3d1
	  Boot ID:                    e5004a05-e979-4cd9-842c-90a668098a75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zxfmj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-hb4lj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m52s
	  kube-system                 kube-proxy-n5fr8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m47s                  kube-proxy  
	  Normal  Starting                 40s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m52s (x5 over 6m54s)  kubelet     Node multinode-119391-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m52s (x5 over 6m54s)  kubelet     Node multinode-119391-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m52s (x5 over 6m54s)  kubelet     Node multinode-119391-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m43s                  kubelet     Node multinode-119391-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  44s (x5 over 45s)      kubelet     Node multinode-119391-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x5 over 45s)      kubelet     Node multinode-119391-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x5 over 45s)      kubelet     Node multinode-119391-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                36s                    kubelet     Node multinode-119391-m02 status is now: NodeReady
	
	
	Name:               multinode-119391-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-119391-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=multinode-119391
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T21_24_19_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:24:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-119391-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:24:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:24:28 +0000   Mon, 18 Mar 2024 21:24:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:24:28 +0000   Mon, 18 Mar 2024 21:24:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:24:28 +0000   Mon, 18 Mar 2024 21:24:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:24:28 +0000   Mon, 18 Mar 2024 21:24:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    multinode-119391-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b7d51d9140d4eb5b742727baba7138f
	  System UUID:                2b7d51d9-140d-4eb5-b742-727baba7138f
	  Boot ID:                    e061719c-ddaa-483e-a984-945e7b176bd9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hhjx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-proxy-9df9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  Starting                 6m                     kube-proxy       
	  Normal  Starting                 8s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m4s (x5 over 6m6s)    kubelet          Node multinode-119391-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x5 over 6m6s)    kubelet          Node multinode-119391-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x5 over 6m6s)    kubelet          Node multinode-119391-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m55s                  kubelet          Node multinode-119391-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m22s (x5 over 5m23s)  kubelet          Node multinode-119391-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m22s (x5 over 5m23s)  kubelet          Node multinode-119391-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m22s (x5 over 5m23s)  kubelet          Node multinode-119391-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m14s                  kubelet          Node multinode-119391-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  14s (x5 over 15s)      kubelet          Node multinode-119391-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x5 over 15s)      kubelet          Node multinode-119391-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x5 over 15s)      kubelet          Node multinode-119391-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                     node-controller  Node multinode-119391-m03 event: Registered Node multinode-119391-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-119391-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.175632] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.148042] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.296231] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +5.292192] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.064543] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.635260] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[  +0.569576] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.194211] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.090245] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.248848] systemd-fstab-generator[1487]: Ignoring "noauto" option for root device
	[  +0.110889] kauditd_printk_skb: 21 callbacks suppressed
	[Mar18 21:17] kauditd_printk_skb: 51 callbacks suppressed
	[ +46.797001] kauditd_printk_skb: 21 callbacks suppressed
	[Mar18 21:22] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.171494] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.171950] systemd-fstab-generator[2833]: Ignoring "noauto" option for root device
	[  +0.140102] systemd-fstab-generator[2845]: Ignoring "noauto" option for root device
	[  +0.305594] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +2.231356] systemd-fstab-generator[2975]: Ignoring "noauto" option for root device
	[  +1.919428] systemd-fstab-generator[3099]: Ignoring "noauto" option for root device
	[Mar18 21:23] kauditd_printk_skb: 144 callbacks suppressed
	[  +5.016186] kauditd_printk_skb: 55 callbacks suppressed
	[ +12.085266] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.246564] systemd-fstab-generator[3929]: Ignoring "noauto" option for root device
	[ +19.191078] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184] <==
	{"level":"info","ts":"2024-03-18T21:23:02.597366Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T21:23:02.597378Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T21:23:02.597765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c switched to configuration voters=(11368748717410181932)"}
	{"level":"info","ts":"2024-03-18T21:23:02.597857Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","added-peer-id":"9dc5e8b969e9632c","added-peer-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-03-18T21:23:02.598004Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:23:02.598031Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:23:02.612272Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T21:23:02.612462Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9dc5e8b969e9632c","initial-advertise-peer-urls":["https://192.168.39.127:2380"],"listen-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.127:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T21:23:02.612527Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T21:23:02.612696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-03-18T21:23:02.612727Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-03-18T21:23:04.237201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T21:23:04.237264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T21:23:04.237348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgPreVoteResp from 9dc5e8b969e9632c at term 2"}
	{"level":"info","ts":"2024-03-18T21:23:04.237365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T21:23:04.237412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgVoteResp from 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-03-18T21:23:04.237424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became leader at term 3"}
	{"level":"info","ts":"2024-03-18T21:23:04.237432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-03-18T21:23:04.246766Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9dc5e8b969e9632c","local-member-attributes":"{Name:multinode-119391 ClientURLs:[https://192.168.39.127:2379]}","request-path":"/0/members/9dc5e8b969e9632c/attributes","cluster-id":"367c7cb0db09c3ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T21:23:04.24691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:23:04.247186Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:23:04.248473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.127:2379"}
	{"level":"info","ts":"2024-03-18T21:23:04.248637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T21:23:04.248864Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T21:23:04.2489Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7] <==
	{"level":"info","ts":"2024-03-18T21:16:43.004101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became leader at term 2"}
	{"level":"info","ts":"2024-03-18T21:16:43.004109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 2"}
	{"level":"info","ts":"2024-03-18T21:16:43.005439Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9dc5e8b969e9632c","local-member-attributes":"{Name:multinode-119391 ClientURLs:[https://192.168.39.127:2379]}","request-path":"/0/members/9dc5e8b969e9632c/attributes","cluster-id":"367c7cb0db09c3ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T21:16:43.005654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:16:43.006486Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.127:2379"}
	{"level":"info","ts":"2024-03-18T21:16:43.006686Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:16:43.006828Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:16:43.007742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T21:16:43.007968Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T21:16:43.008012Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T21:16:43.016774Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:16:43.016875Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:16:43.01692Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-03-18T21:18:27.198539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.863106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-03-18T21:18:27.199313Z","caller":"traceutil/trace.go:171","msg":"trace[188100761] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:575; }","duration":"133.754056ms","start":"2024-03-18T21:18:27.065505Z","end":"2024-03-18T21:18:27.199259Z","steps":["trace[188100761] 'range keys from in-memory index tree'  (duration: 132.623314ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T21:21:24.539545Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-18T21:21:24.539866Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-119391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"]}
	{"level":"warn","ts":"2024-03-18T21:21:24.540078Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T21:21:24.540221Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T21:21:24.630637Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.127:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T21:21:24.630863Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.127:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T21:21:24.630974Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9dc5e8b969e9632c","current-leader-member-id":"9dc5e8b969e9632c"}
	{"level":"info","ts":"2024-03-18T21:21:24.63363Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-03-18T21:21:24.633762Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-03-18T21:21:24.633801Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-119391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"]}
	
	
	==> kernel <==
	 21:24:32 up 8 min,  0 users,  load average: 0.13, 0.20, 0.11
	Linux multinode-119391 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207] <==
	I0318 21:20:37.729983       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:20:47.736363       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:20:47.736385       1 main.go:227] handling current node
	I0318 21:20:47.736394       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:20:47.736405       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:20:47.736708       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:20:47.736725       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:20:57.741541       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:20:57.741716       1 main.go:227] handling current node
	I0318 21:20:57.741740       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:20:57.741758       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:20:57.741887       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:20:57.741908       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:21:07.753614       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:21:07.753754       1 main.go:227] handling current node
	I0318 21:21:07.753787       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:21:07.753807       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:21:07.753979       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:21:07.754117       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:21:17.765254       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:21:17.765304       1 main.go:227] handling current node
	I0318 21:21:17.765314       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:21:17.765320       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:21:17.765437       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:21:17.765472       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae] <==
	I0318 21:23:47.950058       1 main.go:227] handling current node
	I0318 21:23:47.950081       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:23:47.950110       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:23:57.955957       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:23:57.956017       1 main.go:227] handling current node
	I0318 21:23:57.956037       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:23:57.956053       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:23:57.956246       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:23:57.956293       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:24:07.970696       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:24:07.970833       1 main.go:227] handling current node
	I0318 21:24:07.970866       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:24:07.970889       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:24:07.971065       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:24:07.971090       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:24:17.983001       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:24:17.983058       1 main.go:227] handling current node
	I0318 21:24:17.983068       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:24:17.983074       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:24:27.998063       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:24:27.998231       1 main.go:227] handling current node
	I0318 21:24:27.998267       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:24:27.998293       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:24:27.998487       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:24:27.998512       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11] <==
	I0318 21:23:05.723081       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 21:23:05.723117       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 21:23:05.723151       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 21:23:05.785734       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 21:23:05.786006       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 21:23:05.834997       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 21:23:05.836219       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 21:23:05.836260       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 21:23:05.838344       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 21:23:05.839030       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 21:23:05.839204       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 21:23:05.839246       1 aggregator.go:166] initial CRD sync complete...
	I0318 21:23:05.839265       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 21:23:05.839270       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 21:23:05.839274       1 cache.go:39] Caches are synced for autoregister controller
	I0318 21:23:05.855467       1 shared_informer.go:318] Caches are synced for configmaps
	E0318 21:23:05.867500       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 21:23:06.648880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 21:23:08.544850       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 21:23:08.691041       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 21:23:08.706847       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 21:23:08.785241       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 21:23:08.792493       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 21:23:18.800884       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 21:23:18.950008       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb] <==
	W0318 21:21:24.572377       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.572445       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.572502       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.572687       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0318 21:21:24.573203       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 21:21:24.573321       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 21:21:24.573512       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 21:21:24.573676       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 21:21:24.573854       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0318 21:21:24.574522       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574669       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574696       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574736       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574797       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574826       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574883       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574914       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574982       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575040       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575099       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575159       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575222       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575301       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575388       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575486       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a] <==
	I0318 21:23:42.558098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="20.241316ms"
	I0318 21:23:42.558201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.89µs"
	I0318 21:23:48.527390       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-119391-m02\" does not exist"
	I0318 21:23:48.529876       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-w6n2g" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-w6n2g"
	I0318 21:23:48.541975       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-119391-m02" podCIDRs=["10.244.1.0/24"]
	I0318 21:23:48.995825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="73.813µs"
	I0318 21:23:49.047009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="123.483µs"
	I0318 21:23:49.061832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="81.679µs"
	I0318 21:23:49.073047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.106µs"
	I0318 21:23:49.083139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="61.341µs"
	I0318 21:23:49.087721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="172.995µs"
	I0318 21:23:49.088180       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-w6n2g" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-w6n2g"
	I0318 21:23:56.033606       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:23:56.056438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.75µs"
	I0318 21:23:56.066718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.646µs"
	I0318 21:23:58.810468       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-zxfmj" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-zxfmj"
	I0318 21:24:00.234792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.506873ms"
	I0318 21:24:00.236440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.847µs"
	I0318 21:24:16.367925       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:24:18.813079       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-119391-m03 event: Removing Node multinode-119391-m03 from Controller"
	I0318 21:24:18.903532       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-119391-m03\" does not exist"
	I0318 21:24:18.903793       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:24:18.917492       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-119391-m03" podCIDRs=["10.244.2.0/24"]
	I0318 21:24:23.813833       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-119391-m03 event: Registered Node multinode-119391-m03 in Controller"
	I0318 21:24:28.606366       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m03"
	
	
	==> kube-controller-manager [e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb] <==
	I0318 21:18:28.490666       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-119391-m03\" does not exist"
	I0318 21:18:28.491261       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:18:28.516196       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-119391-m03" podCIDRs=["10.244.2.0/24"]
	I0318 21:18:28.528530       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9df9r"
	I0318 21:18:28.528635       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhjx2"
	I0318 21:18:29.584293       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-119391-m03"
	I0318 21:18:29.584379       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-119391-m03 event: Registered Node multinode-119391-m03 in Controller"
	I0318 21:18:37.324663       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:19:08.280793       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:19:09.606820       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-119391-m03 event: Removing Node multinode-119391-m03 from Controller"
	I0318 21:19:10.733700       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-119391-m03\" does not exist"
	I0318 21:19:10.737197       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:19:10.750020       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-119391-m03" podCIDRs=["10.244.3.0/24"]
	I0318 21:19:14.607698       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-119391-m03 event: Registered Node multinode-119391-m03 in Controller"
	I0318 21:19:18.049644       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:20:04.640417       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:20:04.641516       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-119391-m03 status is now: NodeNotReady"
	I0318 21:20:04.646708       1 event.go:307] "Event occurred" object="multinode-119391-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-119391-m02 status is now: NodeNotReady"
	I0318 21:20:04.656300       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-9df9r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:20:04.661036       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-w6n2g" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:20:04.676326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.239044ms"
	I0318 21:20:04.682225       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="33.409µs"
	I0318 21:20:04.682026       1 event.go:307] "Event occurred" object="kube-system/kindnet-hhjx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:20:04.683976       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-n5fr8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:20:04.696783       1 event.go:307] "Event occurred" object="kube-system/kindnet-hb4lj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec] <==
	I0318 21:23:07.146266       1 server_others.go:69] "Using iptables proxy"
	I0318 21:23:07.161482       1 node.go:141] Successfully retrieved node IP: 192.168.39.127
	I0318 21:23:07.263647       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 21:23:07.263703       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:23:07.269502       1 server_others.go:152] "Using iptables Proxier"
	I0318 21:23:07.269671       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:23:07.269996       1 server.go:846] "Version info" version="v1.28.4"
	I0318 21:23:07.270033       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:23:07.272037       1 config.go:188] "Starting service config controller"
	I0318 21:23:07.272081       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:23:07.272105       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:23:07.272108       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:23:07.272474       1 config.go:315] "Starting node config controller"
	I0318 21:23:07.272515       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 21:23:07.373064       1 shared_informer.go:318] Caches are synced for node config
	I0318 21:23:07.373113       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:23:07.373136       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14] <==
	I0318 21:17:02.937304       1 server_others.go:69] "Using iptables proxy"
	I0318 21:17:02.954712       1 node.go:141] Successfully retrieved node IP: 192.168.39.127
	I0318 21:17:03.004960       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 21:17:03.005002       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:17:03.008032       1 server_others.go:152] "Using iptables Proxier"
	I0318 21:17:03.008849       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:17:03.009139       1 server.go:846] "Version info" version="v1.28.4"
	I0318 21:17:03.009174       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:17:03.011427       1 config.go:188] "Starting service config controller"
	I0318 21:17:03.011856       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:17:03.011915       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:17:03.011921       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:17:03.014133       1 config.go:315] "Starting node config controller"
	I0318 21:17:03.014170       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 21:17:03.112295       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 21:17:03.112356       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:17:03.114381       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902] <==
	W0318 21:16:44.613751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 21:16:44.614250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 21:16:44.613800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 21:16:44.614265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 21:16:44.613397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 21:16:44.614277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 21:16:45.495850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 21:16:45.495997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 21:16:45.564811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 21:16:45.564860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 21:16:45.618170       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 21:16:45.620618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 21:16:45.629294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 21:16:45.629345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 21:16:45.774029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 21:16:45.774148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 21:16:45.847506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 21:16:45.847999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 21:16:46.105929       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 21:16:46.107144       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 21:16:48.404313       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 21:21:24.534598       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 21:21:24.534775       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 21:21:24.535166       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0318 21:21:24.549784       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3] <==
	I0318 21:23:02.807247       1 serving.go:348] Generated self-signed cert in-memory
	W0318 21:23:05.747953       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 21:23:05.748009       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 21:23:05.748021       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 21:23:05.748027       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 21:23:05.797060       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 21:23:05.797109       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:23:05.804302       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 21:23:05.804462       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 21:23:05.804519       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 21:23:05.804544       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 21:23:05.904667       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.079951    3106 topology_manager.go:215] "Topology Admit Handler" podUID="227a8900-d2de-4014-8d65-71e10e4da7ce" podNamespace="kube-system" podName="kindnet-6zr7q"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.080738    3106 topology_manager.go:215] "Topology Admit Handler" podUID="e37a8f5f-a4f2-46bc-b180-7bca46e587f9" podNamespace="kube-system" podName="storage-provisioner"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.080864    3106 topology_manager.go:215] "Topology Admit Handler" podUID="4c138ceb-99bf-4e93-a44b-e5feba8348a0" podNamespace="default" podName="busybox-5b5d89c9d6-dr5bb"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.095865    3106 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.177525    3106 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e37a8f5f-a4f2-46bc-b180-7bca46e587f9-tmp\") pod \"storage-provisioner\" (UID: \"e37a8f5f-a4f2-46bc-b180-7bca46e587f9\") " pod="kube-system/storage-provisioner"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.177924    3106 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/227a8900-d2de-4014-8d65-71e10e4da7ce-cni-cfg\") pod \"kindnet-6zr7q\" (UID: \"227a8900-d2de-4014-8d65-71e10e4da7ce\") " pod="kube-system/kindnet-6zr7q"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.178023    3106 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/227a8900-d2de-4014-8d65-71e10e4da7ce-xtables-lock\") pod \"kindnet-6zr7q\" (UID: \"227a8900-d2de-4014-8d65-71e10e4da7ce\") " pod="kube-system/kindnet-6zr7q"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.178238    3106 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/227a8900-d2de-4014-8d65-71e10e4da7ce-lib-modules\") pod \"kindnet-6zr7q\" (UID: \"227a8900-d2de-4014-8d65-71e10e4da7ce\") " pod="kube-system/kindnet-6zr7q"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.178321    3106 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4310f17f-f7dc-43c8-b39f-87b1169e801e-xtables-lock\") pod \"kube-proxy-c9wgb\" (UID: \"4310f17f-f7dc-43c8-b39f-87b1169e801e\") " pod="kube-system/kube-proxy-c9wgb"
	Mar 18 21:23:06 multinode-119391 kubelet[3106]: I0318 21:23:06.178466    3106 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4310f17f-f7dc-43c8-b39f-87b1169e801e-lib-modules\") pod \"kube-proxy-c9wgb\" (UID: \"4310f17f-f7dc-43c8-b39f-87b1169e801e\") " pod="kube-system/kube-proxy-c9wgb"
	Mar 18 21:23:09 multinode-119391 kubelet[3106]: I0318 21:23:09.683213    3106 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.157953    3106 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:24:01 multinode-119391 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:24:01 multinode-119391 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:24:01 multinode-119391 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:24:01 multinode-119391 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.232612    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda5685ec6-fd70-4637-a858-742004871377/crio-47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9: Error finding container 47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9: Status 404 returned error can't find the container with id 47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.232900    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4c138ceb-99bf-4e93-a44b-e5feba8348a0/crio-af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d: Error finding container af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d: Status 404 returned error can't find the container with id af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.233319    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf4976ceef730c00fb0e0a79a308bfcc6/crio-d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e: Error finding container d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e: Status 404 returned error can't find the container with id d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.233846    3106 manager.go:1106] Failed to create existing container: /kubepods/pod227a8900-d2de-4014-8d65-71e10e4da7ce/crio-62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f: Error finding container 62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f: Status 404 returned error can't find the container with id 62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.234410    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod2bad22a9c0de7732043fa0fb0828f2b8/crio-c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864: Error finding container c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864: Status 404 returned error can't find the container with id c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.234715    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4310f17f-f7dc-43c8-b39f-87b1169e801e/crio-a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa: Error finding container a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa: Status 404 returned error can't find the container with id a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.235236    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod35c7403e982fd0b2e0f9e873df315329/crio-acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63: Error finding container acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63: Status 404 returned error can't find the container with id acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.236013    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode37a8f5f-a4f2-46bc-b180-7bca46e587f9/crio-01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a: Error finding container 01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a: Status 404 returned error can't find the container with id 01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a
	Mar 18 21:24:01 multinode-119391 kubelet[3106]: E0318 21:24:01.236264    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod090796776b5603794e61ee5620edcec7/crio-aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c: Error finding container aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c: Status 404 returned error can't find the container with id aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:24:31.289980   38881 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18421-5321/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-119391 -n multinode-119391
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-119391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (312.49s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 stop
E0318 21:25:14.157490   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:25:23.236635   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-119391 stop: exit status 82 (2m0.467536146s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-119391-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-119391 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-119391 status: exit status 3 (18.760900412s)

                                                
                                                
-- stdout --
	multinode-119391
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-119391-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:26:54.757165   39411 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host
	E0318 21:26:54.757208   39411 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.159:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-119391 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-119391 -n multinode-119391
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-119391 logs -n 25: (1.583082526s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m02:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391:/home/docker/cp-test_multinode-119391-m02_multinode-119391.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391 sudo cat                                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m02_multinode-119391.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m02:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03:/home/docker/cp-test_multinode-119391-m02_multinode-119391-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391-m03 sudo cat                                   | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m02_multinode-119391-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp testdata/cp-test.txt                                                | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2904894668/001/cp-test_multinode-119391-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391:/home/docker/cp-test_multinode-119391-m03_multinode-119391.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391 sudo cat                                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m03_multinode-119391.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt                       | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02:/home/docker/cp-test_multinode-119391-m03_multinode-119391-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391-m02 sudo cat                                   | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m03_multinode-119391-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-119391 node stop m03                                                          | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	| node    | multinode-119391 node start                                                             | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-119391                                                                | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:19 UTC |                     |
	| stop    | -p multinode-119391                                                                     | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:19 UTC |                     |
	| start   | -p multinode-119391                                                                     | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:21 UTC | 18 Mar 24 21:24 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-119391                                                                | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:24 UTC |                     |
	| node    | multinode-119391 node delete                                                            | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:24 UTC | 18 Mar 24 21:24 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-119391 stop                                                                   | multinode-119391 | jenkins | v1.32.0 | 18 Mar 24 21:24 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:21:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:21:23.660930   38073 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:21:23.661047   38073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:21:23.661056   38073 out.go:304] Setting ErrFile to fd 2...
	I0318 21:21:23.661060   38073 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:21:23.661222   38073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:21:23.661714   38073 out.go:298] Setting JSON to false
	I0318 21:21:23.662563   38073 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3828,"bootTime":1710793056,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:21:23.662615   38073 start.go:139] virtualization: kvm guest
	I0318 21:21:23.665135   38073 out.go:177] * [multinode-119391] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:21:23.666571   38073 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:21:23.666576   38073 notify.go:220] Checking for updates...
	I0318 21:21:23.667976   38073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:21:23.669442   38073 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:21:23.670643   38073 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:21:23.671885   38073 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:21:23.673235   38073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:21:23.675215   38073 config.go:182] Loaded profile config "multinode-119391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:21:23.675336   38073 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:21:23.675890   38073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:21:23.675947   38073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:21:23.692687   38073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0318 21:21:23.693080   38073 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:21:23.693675   38073 main.go:141] libmachine: Using API Version  1
	I0318 21:21:23.693710   38073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:21:23.694075   38073 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:21:23.694276   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:21:23.726767   38073 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:21:23.727913   38073 start.go:297] selected driver: kvm2
	I0318 21:21:23.727924   38073 start.go:901] validating driver "kvm2" against &{Name:multinode-119391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:21:23.728060   38073 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:21:23.728372   38073 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:21:23.728443   38073 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:21:23.743257   38073 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:21:23.743865   38073 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:21:23.743935   38073 cni.go:84] Creating CNI manager for ""
	I0318 21:21:23.743949   38073 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 21:21:23.744002   38073 start.go:340] cluster config:
	{Name:multinode-119391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-119391 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:21:23.744129   38073 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:21:23.745747   38073 out.go:177] * Starting "multinode-119391" primary control-plane node in "multinode-119391" cluster
	I0318 21:21:23.746929   38073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:21:23.746960   38073 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 21:21:23.746973   38073 cache.go:56] Caching tarball of preloaded images
	I0318 21:21:23.747042   38073 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 21:21:23.747054   38073 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 21:21:23.747182   38073 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/config.json ...
	I0318 21:21:23.747373   38073 start.go:360] acquireMachinesLock for multinode-119391: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:21:23.747444   38073 start.go:364] duration metric: took 52.931µs to acquireMachinesLock for "multinode-119391"
	I0318 21:21:23.747462   38073 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:21:23.747473   38073 fix.go:54] fixHost starting: 
	I0318 21:21:23.747718   38073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:21:23.747754   38073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:21:23.760690   38073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0318 21:21:23.761066   38073 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:21:23.761541   38073 main.go:141] libmachine: Using API Version  1
	I0318 21:21:23.761578   38073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:21:23.761890   38073 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:21:23.762075   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:21:23.762210   38073 main.go:141] libmachine: (multinode-119391) Calling .GetState
	I0318 21:21:23.763698   38073 fix.go:112] recreateIfNeeded on multinode-119391: state=Running err=<nil>
	W0318 21:21:23.763712   38073 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:21:23.765508   38073 out.go:177] * Updating the running kvm2 "multinode-119391" VM ...
	I0318 21:21:23.766624   38073 machine.go:94] provisionDockerMachine start ...
	I0318 21:21:23.766638   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:21:23.766814   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:23.769014   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:23.769434   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:23.769463   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:23.769550   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:23.769668   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:23.769813   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:23.769915   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:23.770040   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:21:23.770252   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:21:23.770266   38073 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:21:23.882968   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-119391
	
	I0318 21:21:23.882995   38073 main.go:141] libmachine: (multinode-119391) Calling .GetMachineName
	I0318 21:21:23.883239   38073 buildroot.go:166] provisioning hostname "multinode-119391"
	I0318 21:21:23.883269   38073 main.go:141] libmachine: (multinode-119391) Calling .GetMachineName
	I0318 21:21:23.883467   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:23.886313   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:23.886728   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:23.886747   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:23.886913   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:23.887101   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:23.887242   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:23.887344   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:23.887476   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:21:23.887661   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:21:23.887677   38073 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-119391 && echo "multinode-119391" | sudo tee /etc/hostname
	I0318 21:21:24.012201   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-119391
	
	I0318 21:21:24.012223   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:24.014918   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.015255   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.015295   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.015420   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:24.015613   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.015775   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.015917   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:24.016071   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:21:24.016224   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:21:24.016240   38073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-119391' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-119391/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-119391' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:21:24.126855   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:21:24.126893   38073 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:21:24.126912   38073 buildroot.go:174] setting up certificates
	I0318 21:21:24.126932   38073 provision.go:84] configureAuth start
	I0318 21:21:24.126943   38073 main.go:141] libmachine: (multinode-119391) Calling .GetMachineName
	I0318 21:21:24.127210   38073 main.go:141] libmachine: (multinode-119391) Calling .GetIP
	I0318 21:21:24.129854   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.130230   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.130259   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.130389   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:24.132318   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.132669   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.132711   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.132845   38073 provision.go:143] copyHostCerts
	I0318 21:21:24.132878   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:21:24.132929   38073 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:21:24.132941   38073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:21:24.133019   38073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:21:24.133127   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:21:24.133162   38073 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:21:24.133172   38073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:21:24.133213   38073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:21:24.133267   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:21:24.133290   38073 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:21:24.133299   38073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:21:24.133330   38073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:21:24.133397   38073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.multinode-119391 san=[127.0.0.1 192.168.39.127 localhost minikube multinode-119391]
	I0318 21:21:24.228063   38073 provision.go:177] copyRemoteCerts
	I0318 21:21:24.228111   38073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:21:24.228130   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:24.230664   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.231008   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.231034   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.231217   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:24.231403   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.231562   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:24.231685   38073 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:21:24.321664   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 21:21:24.321726   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:21:24.350602   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 21:21:24.350664   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 21:21:24.378769   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 21:21:24.378819   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:21:24.407470   38073 provision.go:87] duration metric: took 280.526167ms to configureAuth
	I0318 21:21:24.407503   38073 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:21:24.407734   38073 config.go:182] Loaded profile config "multinode-119391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:21:24.407820   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:21:24.410404   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.410861   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:21:24.410890   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:21:24.411062   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:21:24.411245   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.411418   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:21:24.411581   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:21:24.411731   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:21:24.411927   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:21:24.411962   38073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:22:55.260329   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:22:55.260362   38073 machine.go:97] duration metric: took 1m31.493724827s to provisionDockerMachine
	I0318 21:22:55.260379   38073 start.go:293] postStartSetup for "multinode-119391" (driver="kvm2")
	I0318 21:22:55.260393   38073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:22:55.260414   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.260740   38073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:22:55.260777   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:22:55.263473   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.263976   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.263995   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.264214   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:22:55.264402   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.264559   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:22:55.264703   38073 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:22:55.354158   38073 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:22:55.358990   38073 command_runner.go:130] > NAME=Buildroot
	I0318 21:22:55.359009   38073 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 21:22:55.359015   38073 command_runner.go:130] > ID=buildroot
	I0318 21:22:55.359022   38073 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 21:22:55.359029   38073 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 21:22:55.359065   38073 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:22:55.359080   38073 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:22:55.359129   38073 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:22:55.359204   38073 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:22:55.359213   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /etc/ssl/certs/125682.pem
	I0318 21:22:55.359292   38073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:22:55.370269   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:22:55.396249   38073 start.go:296] duration metric: took 135.859934ms for postStartSetup
	I0318 21:22:55.396299   38073 fix.go:56] duration metric: took 1m31.648812988s for fixHost
	I0318 21:22:55.396323   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:22:55.398973   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.399358   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.399380   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.399560   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:22:55.399772   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.399984   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.400137   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:22:55.400280   38073 main.go:141] libmachine: Using SSH client type: native
	I0318 21:22:55.400442   38073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0318 21:22:55.400452   38073 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:22:55.505733   38073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710796975.479934840
	
	I0318 21:22:55.505756   38073 fix.go:216] guest clock: 1710796975.479934840
	I0318 21:22:55.505767   38073 fix.go:229] Guest: 2024-03-18 21:22:55.47993484 +0000 UTC Remote: 2024-03-18 21:22:55.396305072 +0000 UTC m=+91.781649066 (delta=83.629768ms)
	I0318 21:22:55.505798   38073 fix.go:200] guest clock delta is within tolerance: 83.629768ms
	I0318 21:22:55.505809   38073 start.go:83] releasing machines lock for "multinode-119391", held for 1m31.758354331s
	I0318 21:22:55.505857   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.506088   38073 main.go:141] libmachine: (multinode-119391) Calling .GetIP
	I0318 21:22:55.508766   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.509179   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.509205   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.509336   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.509814   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.509997   38073 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:22:55.510090   38073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:22:55.510138   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:22:55.510179   38073 ssh_runner.go:195] Run: cat /version.json
	I0318 21:22:55.510200   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:22:55.512364   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.512669   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.512696   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.512715   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.512868   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:22:55.513038   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.513175   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:22:55.513204   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:55.513222   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:55.513389   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:22:55.513395   38073 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:22:55.513533   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:22:55.513654   38073 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:22:55.513802   38073 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:22:55.598217   38073 command_runner.go:130] > {"iso_version": "v1.32.1-1710573846-18277", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "c68f4945cc664fefa1b332c623244b57043707c8"}
	I0318 21:22:55.598355   38073 ssh_runner.go:195] Run: systemctl --version
	I0318 21:22:55.621246   38073 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 21:22:55.621889   38073 command_runner.go:130] > systemd 252 (252)
	I0318 21:22:55.621937   38073 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 21:22:55.622003   38073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:22:55.784038   38073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 21:22:55.791214   38073 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 21:22:55.791262   38073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:22:55.791324   38073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:22:55.801079   38073 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 21:22:55.801097   38073 start.go:494] detecting cgroup driver to use...
	I0318 21:22:55.801152   38073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:22:55.819251   38073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:22:55.833420   38073 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:22:55.833456   38073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:22:55.847697   38073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:22:55.861655   38073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:22:56.029630   38073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:22:56.174607   38073 docker.go:233] disabling docker service ...
	I0318 21:22:56.174679   38073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:22:56.192806   38073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:22:56.208211   38073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:22:56.349922   38073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:22:56.488350   38073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:22:56.504512   38073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:22:56.525863   38073 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0318 21:22:56.525937   38073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:22:56.525992   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.538923   38073 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:22:56.539002   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.552106   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.566668   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.580278   38073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:22:56.593378   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.607467   38073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.619477   38073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:22:56.632173   38073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:22:56.643383   38073 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 21:22:56.643545   38073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:22:56.655572   38073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:22:56.796783   38073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:22:58.533986   38073 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.737166363s)
	I0318 21:22:58.534012   38073 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:22:58.534053   38073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:22:58.540169   38073 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0318 21:22:58.540200   38073 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 21:22:58.540211   38073 command_runner.go:130] > Device: 0,22	Inode: 1332        Links: 1
	I0318 21:22:58.540222   38073 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 21:22:58.540233   38073 command_runner.go:130] > Access: 2024-03-18 21:22:58.398889373 +0000
	I0318 21:22:58.540244   38073 command_runner.go:130] > Modify: 2024-03-18 21:22:58.398889373 +0000
	I0318 21:22:58.540256   38073 command_runner.go:130] > Change: 2024-03-18 21:22:58.398889373 +0000
	I0318 21:22:58.540265   38073 command_runner.go:130] >  Birth: -
	I0318 21:22:58.540289   38073 start.go:562] Will wait 60s for crictl version
	I0318 21:22:58.540331   38073 ssh_runner.go:195] Run: which crictl
	I0318 21:22:58.544738   38073 command_runner.go:130] > /usr/bin/crictl
	I0318 21:22:58.544816   38073 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:22:58.583883   38073 command_runner.go:130] > Version:  0.1.0
	I0318 21:22:58.583901   38073 command_runner.go:130] > RuntimeName:  cri-o
	I0318 21:22:58.583906   38073 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0318 21:22:58.583911   38073 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 21:22:58.584031   38073 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
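	With CRI-O restarted, the test waits for /var/run/crio/crio.sock to appear and then queries the runtime through crictl. A minimal sketch of the same check run by hand, assuming the endpoint written to /etc/crictl.yaml above:

	  # Sketch: query the CRI runtime over the configured socket
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  # Reports Version (0.1.0), RuntimeName (cri-o), RuntimeVersion (1.29.1)
	  # and RuntimeApiVersion (v1), matching the output captured above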
	I0318 21:22:58.584111   38073 ssh_runner.go:195] Run: crio --version
	I0318 21:22:58.616790   38073 command_runner.go:130] > crio version 1.29.1
	I0318 21:22:58.616808   38073 command_runner.go:130] > Version:        1.29.1
	I0318 21:22:58.616813   38073 command_runner.go:130] > GitCommit:      unknown
	I0318 21:22:58.616817   38073 command_runner.go:130] > GitCommitDate:  unknown
	I0318 21:22:58.616822   38073 command_runner.go:130] > GitTreeState:   clean
	I0318 21:22:58.616834   38073 command_runner.go:130] > BuildDate:      2024-03-16T12:34:20Z
	I0318 21:22:58.616838   38073 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 21:22:58.616842   38073 command_runner.go:130] > Compiler:       gc
	I0318 21:22:58.616847   38073 command_runner.go:130] > Platform:       linux/amd64
	I0318 21:22:58.616850   38073 command_runner.go:130] > Linkmode:       dynamic
	I0318 21:22:58.616855   38073 command_runner.go:130] > BuildTags:      
	I0318 21:22:58.616860   38073 command_runner.go:130] >   containers_image_ostree_stub
	I0318 21:22:58.616865   38073 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 21:22:58.616874   38073 command_runner.go:130] >   btrfs_noversion
	I0318 21:22:58.616884   38073 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 21:22:58.616894   38073 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 21:22:58.616913   38073 command_runner.go:130] >   seccomp
	I0318 21:22:58.616921   38073 command_runner.go:130] > LDFlags:          unknown
	I0318 21:22:58.616928   38073 command_runner.go:130] > SeccompEnabled:   true
	I0318 21:22:58.616933   38073 command_runner.go:130] > AppArmorEnabled:  false
	I0318 21:22:58.617017   38073 ssh_runner.go:195] Run: crio --version
	I0318 21:22:58.647621   38073 command_runner.go:130] > crio version 1.29.1
	I0318 21:22:58.647639   38073 command_runner.go:130] > Version:        1.29.1
	I0318 21:22:58.647644   38073 command_runner.go:130] > GitCommit:      unknown
	I0318 21:22:58.647649   38073 command_runner.go:130] > GitCommitDate:  unknown
	I0318 21:22:58.647653   38073 command_runner.go:130] > GitTreeState:   clean
	I0318 21:22:58.647658   38073 command_runner.go:130] > BuildDate:      2024-03-16T12:34:20Z
	I0318 21:22:58.647662   38073 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 21:22:58.647666   38073 command_runner.go:130] > Compiler:       gc
	I0318 21:22:58.647670   38073 command_runner.go:130] > Platform:       linux/amd64
	I0318 21:22:58.647674   38073 command_runner.go:130] > Linkmode:       dynamic
	I0318 21:22:58.647680   38073 command_runner.go:130] > BuildTags:      
	I0318 21:22:58.647684   38073 command_runner.go:130] >   containers_image_ostree_stub
	I0318 21:22:58.647688   38073 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 21:22:58.647692   38073 command_runner.go:130] >   btrfs_noversion
	I0318 21:22:58.647696   38073 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 21:22:58.647700   38073 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 21:22:58.647706   38073 command_runner.go:130] >   seccomp
	I0318 21:22:58.647710   38073 command_runner.go:130] > LDFlags:          unknown
	I0318 21:22:58.647717   38073 command_runner.go:130] > SeccompEnabled:   true
	I0318 21:22:58.647723   38073 command_runner.go:130] > AppArmorEnabled:  false
	I0318 21:22:58.649999   38073 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:22:58.651496   38073 main.go:141] libmachine: (multinode-119391) Calling .GetIP
	I0318 21:22:58.654158   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:58.654520   38073 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:22:58.654541   38073 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:22:58.654714   38073 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:22:58.659207   38073 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0318 21:22:58.659513   38073 kubeadm.go:877] updating cluster {Name:multinode-119391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:22:58.659662   38073 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:22:58.659714   38073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:22:58.710807   38073 command_runner.go:130] > {
	I0318 21:22:58.710830   38073 command_runner.go:130] >   "images": [
	I0318 21:22:58.710837   38073 command_runner.go:130] >     {
	I0318 21:22:58.710849   38073 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 21:22:58.710859   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.710868   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 21:22:58.710874   38073 command_runner.go:130] >       ],
	I0318 21:22:58.710880   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.710894   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 21:22:58.710910   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 21:22:58.710914   38073 command_runner.go:130] >       ],
	I0318 21:22:58.710925   38073 command_runner.go:130] >       "size": "65258016",
	I0318 21:22:58.710931   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.710940   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.710947   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.710956   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.710960   38073 command_runner.go:130] >     },
	I0318 21:22:58.710965   38073 command_runner.go:130] >     {
	I0318 21:22:58.710976   38073 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 21:22:58.710988   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.710999   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 21:22:58.711007   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711012   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711025   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 21:22:58.711038   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 21:22:58.711046   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711055   38073 command_runner.go:130] >       "size": "65291810",
	I0318 21:22:58.711061   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.711074   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711082   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711088   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711096   38073 command_runner.go:130] >     },
	I0318 21:22:58.711101   38073 command_runner.go:130] >     {
	I0318 21:22:58.711113   38073 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 21:22:58.711122   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711130   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 21:22:58.711138   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711147   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711158   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 21:22:58.711171   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 21:22:58.711177   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711185   38073 command_runner.go:130] >       "size": "1363676",
	I0318 21:22:58.711194   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.711199   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711209   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711215   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711223   38073 command_runner.go:130] >     },
	I0318 21:22:58.711229   38073 command_runner.go:130] >     {
	I0318 21:22:58.711242   38073 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 21:22:58.711258   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711269   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 21:22:58.711275   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711284   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711296   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 21:22:58.711323   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 21:22:58.711339   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711346   38073 command_runner.go:130] >       "size": "31470524",
	I0318 21:22:58.711356   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.711362   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711368   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711374   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711382   38073 command_runner.go:130] >     },
	I0318 21:22:58.711387   38073 command_runner.go:130] >     {
	I0318 21:22:58.711399   38073 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 21:22:58.711406   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711421   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 21:22:58.711429   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711436   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711450   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 21:22:58.711463   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 21:22:58.711471   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711478   38073 command_runner.go:130] >       "size": "53621675",
	I0318 21:22:58.711486   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.711493   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711502   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711509   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711518   38073 command_runner.go:130] >     },
	I0318 21:22:58.711525   38073 command_runner.go:130] >     {
	I0318 21:22:58.711537   38073 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 21:22:58.711545   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711555   38073 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 21:22:58.711562   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711569   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711583   38073 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 21:22:58.711597   38073 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 21:22:58.711605   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711611   38073 command_runner.go:130] >       "size": "295456551",
	I0318 21:22:58.711619   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.711625   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.711632   38073 command_runner.go:130] >       },
	I0318 21:22:58.711638   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711653   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711660   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711665   38073 command_runner.go:130] >     },
	I0318 21:22:58.711673   38073 command_runner.go:130] >     {
	I0318 21:22:58.711683   38073 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 21:22:58.711692   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711700   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 21:22:58.711710   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711720   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711731   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 21:22:58.711745   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 21:22:58.711753   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711759   38073 command_runner.go:130] >       "size": "127226832",
	I0318 21:22:58.711768   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.711774   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.711783   38073 command_runner.go:130] >       },
	I0318 21:22:58.711789   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711798   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711804   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711811   38073 command_runner.go:130] >     },
	I0318 21:22:58.711824   38073 command_runner.go:130] >     {
	I0318 21:22:58.711837   38073 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 21:22:58.711844   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.711852   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 21:22:58.711860   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711867   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.711898   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 21:22:58.711916   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 21:22:58.711923   38073 command_runner.go:130] >       ],
	I0318 21:22:58.711933   38073 command_runner.go:130] >       "size": "123261750",
	I0318 21:22:58.711938   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.711944   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.711953   38073 command_runner.go:130] >       },
	I0318 21:22:58.711959   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.711964   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.711971   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.711980   38073 command_runner.go:130] >     },
	I0318 21:22:58.711985   38073 command_runner.go:130] >     {
	I0318 21:22:58.711995   38073 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 21:22:58.712000   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.712012   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 21:22:58.712017   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712030   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.712041   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 21:22:58.712052   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 21:22:58.712062   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712068   38073 command_runner.go:130] >       "size": "74749335",
	I0318 21:22:58.712078   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.712083   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.712093   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.712099   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.712105   38073 command_runner.go:130] >     },
	I0318 21:22:58.712110   38073 command_runner.go:130] >     {
	I0318 21:22:58.712119   38073 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 21:22:58.712129   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.712139   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 21:22:58.712145   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712154   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.712169   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 21:22:58.712183   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 21:22:58.712189   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712196   38073 command_runner.go:130] >       "size": "61551410",
	I0318 21:22:58.712205   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.712211   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.712219   38073 command_runner.go:130] >       },
	I0318 21:22:58.712225   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.712233   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.712240   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.712249   38073 command_runner.go:130] >     },
	I0318 21:22:58.712259   38073 command_runner.go:130] >     {
	I0318 21:22:58.712271   38073 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 21:22:58.712280   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.712293   38073 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 21:22:58.712301   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712307   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.712321   38073 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 21:22:58.712334   38073 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 21:22:58.712343   38073 command_runner.go:130] >       ],
	I0318 21:22:58.712350   38073 command_runner.go:130] >       "size": "750414",
	I0318 21:22:58.712359   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.712365   38073 command_runner.go:130] >         "value": "65535"
	I0318 21:22:58.712371   38073 command_runner.go:130] >       },
	I0318 21:22:58.712374   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.712379   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.712385   38073 command_runner.go:130] >       "pinned": true
	I0318 21:22:58.712388   38073 command_runner.go:130] >     }
	I0318 21:22:58.712391   38073 command_runner.go:130] >   ]
	I0318 21:22:58.712394   38073 command_runner.go:130] > }
	I0318 21:22:58.712619   38073 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:22:58.712633   38073 crio.go:433] Images already preloaded, skipping extraction
	I0318 21:22:58.712673   38073 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:22:58.750102   38073 command_runner.go:130] > {
	I0318 21:22:58.750124   38073 command_runner.go:130] >   "images": [
	I0318 21:22:58.750131   38073 command_runner.go:130] >     {
	I0318 21:22:58.750140   38073 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 21:22:58.750152   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750160   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 21:22:58.750175   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750181   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750202   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 21:22:58.750214   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 21:22:58.750218   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750224   38073 command_runner.go:130] >       "size": "65258016",
	I0318 21:22:58.750231   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750235   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750248   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750258   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750267   38073 command_runner.go:130] >     },
	I0318 21:22:58.750276   38073 command_runner.go:130] >     {
	I0318 21:22:58.750289   38073 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 21:22:58.750298   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750310   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 21:22:58.750316   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750320   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750329   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 21:22:58.750344   38073 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 21:22:58.750359   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750366   38073 command_runner.go:130] >       "size": "65291810",
	I0318 21:22:58.750375   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750390   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750399   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750409   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750418   38073 command_runner.go:130] >     },
	I0318 21:22:58.750425   38073 command_runner.go:130] >     {
	I0318 21:22:58.750432   38073 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 21:22:58.750440   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750455   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 21:22:58.750474   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750485   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750500   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 21:22:58.750515   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 21:22:58.750524   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750533   38073 command_runner.go:130] >       "size": "1363676",
	I0318 21:22:58.750538   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750543   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750552   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750562   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750568   38073 command_runner.go:130] >     },
	I0318 21:22:58.750577   38073 command_runner.go:130] >     {
	I0318 21:22:58.750590   38073 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 21:22:58.750599   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750610   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 21:22:58.750618   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750627   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750638   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 21:22:58.750662   38073 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 21:22:58.750672   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750679   38073 command_runner.go:130] >       "size": "31470524",
	I0318 21:22:58.750688   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750698   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750708   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750717   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750727   38073 command_runner.go:130] >     },
	I0318 21:22:58.750735   38073 command_runner.go:130] >     {
	I0318 21:22:58.750746   38073 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 21:22:58.750755   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750766   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 21:22:58.750775   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750783   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750798   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 21:22:58.750813   38073 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 21:22:58.750821   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750831   38073 command_runner.go:130] >       "size": "53621675",
	I0318 21:22:58.750846   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.750855   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.750864   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.750874   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.750880   38073 command_runner.go:130] >     },
	I0318 21:22:58.750889   38073 command_runner.go:130] >     {
	I0318 21:22:58.750902   38073 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 21:22:58.750911   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.750922   38073 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 21:22:58.750931   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750940   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.750964   38073 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 21:22:58.750979   38073 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 21:22:58.750990   38073 command_runner.go:130] >       ],
	I0318 21:22:58.750999   38073 command_runner.go:130] >       "size": "295456551",
	I0318 21:22:58.751008   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751018   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.751027   38073 command_runner.go:130] >       },
	I0318 21:22:58.751037   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751045   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751049   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751057   38073 command_runner.go:130] >     },
	I0318 21:22:58.751069   38073 command_runner.go:130] >     {
	I0318 21:22:58.751082   38073 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 21:22:58.751092   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751104   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 21:22:58.751113   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751123   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751138   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 21:22:58.751151   38073 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 21:22:58.751158   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751165   38073 command_runner.go:130] >       "size": "127226832",
	I0318 21:22:58.751174   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751182   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.751191   38073 command_runner.go:130] >       },
	I0318 21:22:58.751201   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751217   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751226   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751235   38073 command_runner.go:130] >     },
	I0318 21:22:58.751243   38073 command_runner.go:130] >     {
	I0318 21:22:58.751249   38073 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 21:22:58.751258   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751269   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 21:22:58.751279   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751288   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751317   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 21:22:58.751333   38073 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 21:22:58.751337   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751351   38073 command_runner.go:130] >       "size": "123261750",
	I0318 21:22:58.751361   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751368   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.751377   38073 command_runner.go:130] >       },
	I0318 21:22:58.751386   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751395   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751404   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751412   38073 command_runner.go:130] >     },
	I0318 21:22:58.751418   38073 command_runner.go:130] >     {
	I0318 21:22:58.751431   38073 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 21:22:58.751441   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751449   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 21:22:58.751458   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751464   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751476   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 21:22:58.751491   38073 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 21:22:58.751499   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751505   38073 command_runner.go:130] >       "size": "74749335",
	I0318 21:22:58.751514   38073 command_runner.go:130] >       "uid": null,
	I0318 21:22:58.751530   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751539   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751546   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751555   38073 command_runner.go:130] >     },
	I0318 21:22:58.751560   38073 command_runner.go:130] >     {
	I0318 21:22:58.751579   38073 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 21:22:58.751589   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751598   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 21:22:58.751606   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751614   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751628   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 21:22:58.751643   38073 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 21:22:58.751651   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751661   38073 command_runner.go:130] >       "size": "61551410",
	I0318 21:22:58.751670   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751677   38073 command_runner.go:130] >         "value": "0"
	I0318 21:22:58.751686   38073 command_runner.go:130] >       },
	I0318 21:22:58.751695   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751703   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751712   38073 command_runner.go:130] >       "pinned": false
	I0318 21:22:58.751720   38073 command_runner.go:130] >     },
	I0318 21:22:58.751729   38073 command_runner.go:130] >     {
	I0318 21:22:58.751739   38073 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 21:22:58.751749   38073 command_runner.go:130] >       "repoTags": [
	I0318 21:22:58.751760   38073 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 21:22:58.751766   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751776   38073 command_runner.go:130] >       "repoDigests": [
	I0318 21:22:58.751791   38073 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 21:22:58.751805   38073 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 21:22:58.751814   38073 command_runner.go:130] >       ],
	I0318 21:22:58.751822   38073 command_runner.go:130] >       "size": "750414",
	I0318 21:22:58.751826   38073 command_runner.go:130] >       "uid": {
	I0318 21:22:58.751836   38073 command_runner.go:130] >         "value": "65535"
	I0318 21:22:58.751845   38073 command_runner.go:130] >       },
	I0318 21:22:58.751855   38073 command_runner.go:130] >       "username": "",
	I0318 21:22:58.751865   38073 command_runner.go:130] >       "spec": null,
	I0318 21:22:58.751874   38073 command_runner.go:130] >       "pinned": true
	I0318 21:22:58.751882   38073 command_runner.go:130] >     }
	I0318 21:22:58.751890   38073 command_runner.go:130] >   ]
	I0318 21:22:58.751895   38073 command_runner.go:130] > }
	I0318 21:22:58.752109   38073 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:22:58.752126   38073 cache_images.go:84] Images are preloaded, skipping loading
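	Both "sudo crictl images --output json" runs above return the complete image set for Kubernetes v1.28.4 on CRI-O, so minikube concludes the preload tarball does not need to be extracted or loaded. A minimal sketch of inspecting the same listing by hand (the jq filter is an assumption for illustration; jq is not used by the test):

	  # Sketch: list the preloaded image tags reported by CRI-O
	  sudo crictl images --output json | jq -r '.images[].repoTags[]'
	  # Should include registry.k8s.io/kube-apiserver:v1.28.4, registry.k8s.io/etcd:3.5.9-0,
	  # registry.k8s.io/pause:3.9 and the other entries shown in the JSON above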
	I0318 21:22:58.752140   38073 kubeadm.go:928] updating node { 192.168.39.127 8443 v1.28.4 crio true true} ...
	I0318 21:22:58.752303   38073 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-119391 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
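	The kubelet unit fragment above is what minikube renders for this node: ExecStart is cleared and reset to the v1.28.4 kubelet binary with the bootstrap kubeconfig, the hostname override multinode-119391 and the node IP 192.168.39.127. A minimal sketch of installing such a drop-in manually; the drop-in path below is an assumption for illustration and does not come from this log:

	  # Sketch: install a kubelet systemd drop-in and restart the service
	  # NOTE: the drop-in path is assumed for illustration only
	  sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	  [Unit]
	  Wants=crio.service

	  [Service]
	  ExecStart=
	  ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-119391 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	  EOF
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet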
	I0318 21:22:58.752396   38073 ssh_runner.go:195] Run: crio config
	I0318 21:22:58.805509   38073 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0318 21:22:58.805544   38073 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0318 21:22:58.805555   38073 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0318 21:22:58.805560   38073 command_runner.go:130] > #
	I0318 21:22:58.805571   38073 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0318 21:22:58.805582   38073 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0318 21:22:58.805598   38073 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0318 21:22:58.805609   38073 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0318 21:22:58.805618   38073 command_runner.go:130] > # reload'.
	I0318 21:22:58.805627   38073 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0318 21:22:58.805641   38073 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0318 21:22:58.805655   38073 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0318 21:22:58.805665   38073 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0318 21:22:58.805673   38073 command_runner.go:130] > [crio]
	I0318 21:22:58.805682   38073 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0318 21:22:58.805694   38073 command_runner.go:130] > # containers images, in this directory.
	I0318 21:22:58.805702   38073 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0318 21:22:58.805721   38073 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0318 21:22:58.805731   38073 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0318 21:22:58.805743   38073 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0318 21:22:58.805753   38073 command_runner.go:130] > # imagestore = ""
	I0318 21:22:58.805762   38073 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0318 21:22:58.805775   38073 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0318 21:22:58.805786   38073 command_runner.go:130] > storage_driver = "overlay"
	I0318 21:22:58.805795   38073 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0318 21:22:58.805808   38073 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0318 21:22:58.805818   38073 command_runner.go:130] > storage_option = [
	I0318 21:22:58.805829   38073 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0318 21:22:58.805838   38073 command_runner.go:130] > ]
	I0318 21:22:58.805848   38073 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0318 21:22:58.805861   38073 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0318 21:22:58.805878   38073 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0318 21:22:58.805890   38073 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0318 21:22:58.805903   38073 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0318 21:22:58.805913   38073 command_runner.go:130] > # always happen on a node reboot
	I0318 21:22:58.805929   38073 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0318 21:22:58.805949   38073 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0318 21:22:58.805968   38073 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0318 21:22:58.805979   38073 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0318 21:22:58.805990   38073 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0318 21:22:58.806006   38073 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0318 21:22:58.806023   38073 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0318 21:22:58.806033   38073 command_runner.go:130] > # internal_wipe = true
	I0318 21:22:58.806045   38073 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0318 21:22:58.806057   38073 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0318 21:22:58.806067   38073 command_runner.go:130] > # internal_repair = false
	I0318 21:22:58.806074   38073 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0318 21:22:58.806086   38073 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0318 21:22:58.806098   38073 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0318 21:22:58.806106   38073 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0318 21:22:58.806118   38073 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0318 21:22:58.806126   38073 command_runner.go:130] > [crio.api]
	I0318 21:22:58.806134   38073 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0318 21:22:58.806145   38073 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0318 21:22:58.806155   38073 command_runner.go:130] > # IP address on which the stream server will listen.
	I0318 21:22:58.806162   38073 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0318 21:22:58.806175   38073 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0318 21:22:58.806185   38073 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0318 21:22:58.806196   38073 command_runner.go:130] > # stream_port = "0"
	I0318 21:22:58.806206   38073 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0318 21:22:58.806216   38073 command_runner.go:130] > # stream_enable_tls = false
	I0318 21:22:58.806224   38073 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0318 21:22:58.806234   38073 command_runner.go:130] > # stream_idle_timeout = ""
	I0318 21:22:58.806243   38073 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0318 21:22:58.806255   38073 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0318 21:22:58.806263   38073 command_runner.go:130] > # minutes.
	I0318 21:22:58.806269   38073 command_runner.go:130] > # stream_tls_cert = ""
	I0318 21:22:58.806288   38073 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0318 21:22:58.806301   38073 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0318 21:22:58.806310   38073 command_runner.go:130] > # stream_tls_key = ""
	I0318 21:22:58.806319   38073 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0318 21:22:58.806335   38073 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0318 21:22:58.806369   38073 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0318 21:22:58.806379   38073 command_runner.go:130] > # stream_tls_ca = ""
	I0318 21:22:58.806390   38073 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 21:22:58.806400   38073 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0318 21:22:58.806411   38073 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 21:22:58.806422   38073 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0318 21:22:58.806447   38073 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0318 21:22:58.806460   38073 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0318 21:22:58.806469   38073 command_runner.go:130] > [crio.runtime]
	I0318 21:22:58.806479   38073 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0318 21:22:58.806491   38073 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0318 21:22:58.806502   38073 command_runner.go:130] > # "nofile=1024:2048"
	I0318 21:22:58.806515   38073 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0318 21:22:58.806524   38073 command_runner.go:130] > # default_ulimits = [
	I0318 21:22:58.806532   38073 command_runner.go:130] > # ]
	I0318 21:22:58.806546   38073 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0318 21:22:58.806555   38073 command_runner.go:130] > # no_pivot = false
	I0318 21:22:58.806568   38073 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0318 21:22:58.806582   38073 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0318 21:22:58.806592   38073 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0318 21:22:58.806600   38073 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0318 21:22:58.806611   38073 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0318 21:22:58.806624   38073 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 21:22:58.806634   38073 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0318 21:22:58.806640   38073 command_runner.go:130] > # Cgroup setting for conmon
	I0318 21:22:58.806653   38073 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0318 21:22:58.806660   38073 command_runner.go:130] > conmon_cgroup = "pod"
	I0318 21:22:58.806673   38073 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0318 21:22:58.806684   38073 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0318 21:22:58.806698   38073 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 21:22:58.806707   38073 command_runner.go:130] > conmon_env = [
	I0318 21:22:58.806733   38073 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 21:22:58.806742   38073 command_runner.go:130] > ]
	I0318 21:22:58.806750   38073 command_runner.go:130] > # Additional environment variables to set for all the
	I0318 21:22:58.806761   38073 command_runner.go:130] > # containers. These are overridden if set in the
	I0318 21:22:58.806774   38073 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0318 21:22:58.806786   38073 command_runner.go:130] > # default_env = [
	I0318 21:22:58.806794   38073 command_runner.go:130] > # ]
	I0318 21:22:58.806803   38073 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0318 21:22:58.806819   38073 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0318 21:22:58.806828   38073 command_runner.go:130] > # selinux = false
	I0318 21:22:58.806839   38073 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0318 21:22:58.806852   38073 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0318 21:22:58.806861   38073 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0318 21:22:58.806871   38073 command_runner.go:130] > # seccomp_profile = ""
	I0318 21:22:58.806879   38073 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0318 21:22:58.806895   38073 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0318 21:22:58.806908   38073 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0318 21:22:58.806918   38073 command_runner.go:130] > # which might increase security.
	I0318 21:22:58.806925   38073 command_runner.go:130] > # This option is currently deprecated,
	I0318 21:22:58.806937   38073 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0318 21:22:58.806947   38073 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0318 21:22:58.806957   38073 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0318 21:22:58.806970   38073 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0318 21:22:58.806981   38073 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0318 21:22:58.806994   38073 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0318 21:22:58.807005   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.807012   38073 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0318 21:22:58.807020   38073 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0318 21:22:58.807030   38073 command_runner.go:130] > # the cgroup blockio controller.
	I0318 21:22:58.807036   38073 command_runner.go:130] > # blockio_config_file = ""
	I0318 21:22:58.807047   38073 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0318 21:22:58.807056   38073 command_runner.go:130] > # blockio parameters.
	I0318 21:22:58.807065   38073 command_runner.go:130] > # blockio_reload = false
	I0318 21:22:58.807079   38073 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0318 21:22:58.807085   38073 command_runner.go:130] > # irqbalance daemon.
	I0318 21:22:58.807094   38073 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0318 21:22:58.807114   38073 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0318 21:22:58.807128   38073 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0318 21:22:58.807141   38073 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0318 21:22:58.807153   38073 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0318 21:22:58.807165   38073 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0318 21:22:58.807175   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.807190   38073 command_runner.go:130] > # rdt_config_file = ""
	I0318 21:22:58.807201   38073 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0318 21:22:58.807210   38073 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0318 21:22:58.807276   38073 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0318 21:22:58.807292   38073 command_runner.go:130] > # separate_pull_cgroup = ""
	I0318 21:22:58.807303   38073 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0318 21:22:58.807317   38073 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0318 21:22:58.807326   38073 command_runner.go:130] > # will be added.
	I0318 21:22:58.807333   38073 command_runner.go:130] > # default_capabilities = [
	I0318 21:22:58.807342   38073 command_runner.go:130] > # 	"CHOWN",
	I0318 21:22:58.807348   38073 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0318 21:22:58.807361   38073 command_runner.go:130] > # 	"FSETID",
	I0318 21:22:58.807370   38073 command_runner.go:130] > # 	"FOWNER",
	I0318 21:22:58.807376   38073 command_runner.go:130] > # 	"SETGID",
	I0318 21:22:58.807384   38073 command_runner.go:130] > # 	"SETUID",
	I0318 21:22:58.807390   38073 command_runner.go:130] > # 	"SETPCAP",
	I0318 21:22:58.807399   38073 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0318 21:22:58.807404   38073 command_runner.go:130] > # 	"KILL",
	I0318 21:22:58.807413   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807424   38073 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0318 21:22:58.807437   38073 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0318 21:22:58.807447   38073 command_runner.go:130] > # add_inheritable_capabilities = false
	I0318 21:22:58.807460   38073 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0318 21:22:58.807472   38073 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 21:22:58.807478   38073 command_runner.go:130] > default_sysctls = [
	I0318 21:22:58.807488   38073 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0318 21:22:58.807492   38073 command_runner.go:130] > ]
	I0318 21:22:58.807502   38073 command_runner.go:130] > # List of devices on the host that a
	I0318 21:22:58.807512   38073 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0318 21:22:58.807521   38073 command_runner.go:130] > # allowed_devices = [
	I0318 21:22:58.807535   38073 command_runner.go:130] > # 	"/dev/fuse",
	I0318 21:22:58.807544   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807552   38073 command_runner.go:130] > # List of additional devices, specified as
	I0318 21:22:58.807568   38073 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0318 21:22:58.807580   38073 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0318 21:22:58.807593   38073 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 21:22:58.807603   38073 command_runner.go:130] > # additional_devices = [
	I0318 21:22:58.807608   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807621   38073 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0318 21:22:58.807630   38073 command_runner.go:130] > # cdi_spec_dirs = [
	I0318 21:22:58.807636   38073 command_runner.go:130] > # 	"/etc/cdi",
	I0318 21:22:58.807645   38073 command_runner.go:130] > # 	"/var/run/cdi",
	I0318 21:22:58.807650   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807664   38073 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0318 21:22:58.807681   38073 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0318 21:22:58.807691   38073 command_runner.go:130] > # Defaults to false.
	I0318 21:22:58.807699   38073 command_runner.go:130] > # device_ownership_from_security_context = false
	I0318 21:22:58.807713   38073 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0318 21:22:58.807725   38073 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0318 21:22:58.807734   38073 command_runner.go:130] > # hooks_dir = [
	I0318 21:22:58.807742   38073 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0318 21:22:58.807750   38073 command_runner.go:130] > # ]
	I0318 21:22:58.807760   38073 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0318 21:22:58.807772   38073 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0318 21:22:58.807784   38073 command_runner.go:130] > # its default mounts from the following two files:
	I0318 21:22:58.807792   38073 command_runner.go:130] > #
	I0318 21:22:58.807800   38073 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0318 21:22:58.807812   38073 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0318 21:22:58.807824   38073 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0318 21:22:58.807832   38073 command_runner.go:130] > #
	I0318 21:22:58.807840   38073 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0318 21:22:58.807853   38073 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0318 21:22:58.807862   38073 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0318 21:22:58.807873   38073 command_runner.go:130] > #      only add mounts it finds in this file.
	I0318 21:22:58.807877   38073 command_runner.go:130] > #
	I0318 21:22:58.807884   38073 command_runner.go:130] > # default_mounts_file = ""
	I0318 21:22:58.807902   38073 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0318 21:22:58.807915   38073 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0318 21:22:58.807924   38073 command_runner.go:130] > pids_limit = 1024
	I0318 21:22:58.807933   38073 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0318 21:22:58.807947   38073 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0318 21:22:58.807961   38073 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0318 21:22:58.807978   38073 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0318 21:22:58.807988   38073 command_runner.go:130] > # log_size_max = -1
	I0318 21:22:58.808001   38073 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0318 21:22:58.808010   38073 command_runner.go:130] > # log_to_journald = false
	I0318 21:22:58.808022   38073 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0318 21:22:58.808033   38073 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0318 21:22:58.808041   38073 command_runner.go:130] > # Path to directory for container attach sockets.
	I0318 21:22:58.808053   38073 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0318 21:22:58.808063   38073 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0318 21:22:58.808073   38073 command_runner.go:130] > # bind_mount_prefix = ""
	I0318 21:22:58.808085   38073 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0318 21:22:58.808095   38073 command_runner.go:130] > # read_only = false
	I0318 21:22:58.808104   38073 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0318 21:22:58.808116   38073 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0318 21:22:58.808125   38073 command_runner.go:130] > # live configuration reload.
	I0318 21:22:58.808132   38073 command_runner.go:130] > # log_level = "info"
	I0318 21:22:58.808143   38073 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0318 21:22:58.808151   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.808160   38073 command_runner.go:130] > # log_filter = ""
	I0318 21:22:58.808169   38073 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0318 21:22:58.808182   38073 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0318 21:22:58.808191   38073 command_runner.go:130] > # separated by comma.
	I0318 21:22:58.808202   38073 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 21:22:58.808211   38073 command_runner.go:130] > # uid_mappings = ""
	I0318 21:22:58.808223   38073 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0318 21:22:58.808236   38073 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0318 21:22:58.808245   38073 command_runner.go:130] > # separated by comma.
	I0318 21:22:58.808255   38073 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 21:22:58.808265   38073 command_runner.go:130] > # gid_mappings = ""
	I0318 21:22:58.808276   38073 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0318 21:22:58.808296   38073 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 21:22:58.808307   38073 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 21:22:58.808321   38073 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 21:22:58.808332   38073 command_runner.go:130] > # minimum_mappable_uid = -1
	I0318 21:22:58.808341   38073 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0318 21:22:58.808354   38073 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 21:22:58.808372   38073 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 21:22:58.808388   38073 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 21:22:58.808399   38073 command_runner.go:130] > # minimum_mappable_gid = -1
	I0318 21:22:58.808411   38073 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0318 21:22:58.808424   38073 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0318 21:22:58.808438   38073 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0318 21:22:58.808448   38073 command_runner.go:130] > # ctr_stop_timeout = 30
	I0318 21:22:58.808458   38073 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0318 21:22:58.808469   38073 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0318 21:22:58.808482   38073 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0318 21:22:58.808493   38073 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0318 21:22:58.808502   38073 command_runner.go:130] > drop_infra_ctr = false
	I0318 21:22:58.808515   38073 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0318 21:22:58.808527   38073 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0318 21:22:58.808540   38073 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0318 21:22:58.808549   38073 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0318 21:22:58.808559   38073 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0318 21:22:58.808571   38073 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0318 21:22:58.808585   38073 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0318 21:22:58.808596   38073 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0318 21:22:58.808607   38073 command_runner.go:130] > # shared_cpuset = ""
	I0318 21:22:58.808619   38073 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0318 21:22:58.808630   38073 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0318 21:22:58.808640   38073 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0318 21:22:58.808655   38073 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0318 21:22:58.808665   38073 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0318 21:22:58.808676   38073 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0318 21:22:58.808696   38073 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0318 21:22:58.808706   38073 command_runner.go:130] > # enable_criu_support = false
	I0318 21:22:58.808716   38073 command_runner.go:130] > # Enable/disable the generation of the container,
	I0318 21:22:58.808735   38073 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0318 21:22:58.808745   38073 command_runner.go:130] > # enable_pod_events = false
	I0318 21:22:58.808757   38073 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0318 21:22:58.808787   38073 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0318 21:22:58.808796   38073 command_runner.go:130] > # default_runtime = "runc"
	I0318 21:22:58.808814   38073 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0318 21:22:58.808830   38073 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0318 21:22:58.808848   38073 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0318 21:22:58.808859   38073 command_runner.go:130] > # creation as a file is not desired either.
	I0318 21:22:58.808874   38073 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0318 21:22:58.808885   38073 command_runner.go:130] > # the hostname is being managed dynamically.
	I0318 21:22:58.808895   38073 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0318 21:22:58.808900   38073 command_runner.go:130] > # ]
	I0318 21:22:58.808928   38073 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0318 21:22:58.808942   38073 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0318 21:22:58.808955   38073 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0318 21:22:58.808967   38073 command_runner.go:130] > # Each entry in the table should follow the format:
	I0318 21:22:58.808975   38073 command_runner.go:130] > #
	I0318 21:22:58.808986   38073 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0318 21:22:58.808997   38073 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0318 21:22:58.809061   38073 command_runner.go:130] > # runtime_type = "oci"
	I0318 21:22:58.809073   38073 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0318 21:22:58.809080   38073 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0318 21:22:58.809087   38073 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0318 21:22:58.809093   38073 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0318 21:22:58.809103   38073 command_runner.go:130] > # monitor_env = []
	I0318 21:22:58.809111   38073 command_runner.go:130] > # privileged_without_host_devices = false
	I0318 21:22:58.809120   38073 command_runner.go:130] > # allowed_annotations = []
	I0318 21:22:58.809128   38073 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0318 21:22:58.809137   38073 command_runner.go:130] > # Where:
	I0318 21:22:58.809145   38073 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0318 21:22:58.809157   38073 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0318 21:22:58.809169   38073 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0318 21:22:58.809181   38073 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0318 21:22:58.809190   38073 command_runner.go:130] > #   in $PATH.
	I0318 21:22:58.809205   38073 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0318 21:22:58.809218   38073 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0318 21:22:58.809232   38073 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0318 21:22:58.809241   38073 command_runner.go:130] > #   state.
	I0318 21:22:58.809253   38073 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0318 21:22:58.809265   38073 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0318 21:22:58.809278   38073 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0318 21:22:58.809290   38073 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0318 21:22:58.809299   38073 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0318 21:22:58.809312   38073 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0318 21:22:58.809322   38073 command_runner.go:130] > #   The currently recognized values are:
	I0318 21:22:58.809337   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0318 21:22:58.809353   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0318 21:22:58.809371   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0318 21:22:58.809382   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0318 21:22:58.809398   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0318 21:22:58.809415   38073 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0318 21:22:58.809428   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0318 21:22:58.809438   38073 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0318 21:22:58.809447   38073 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0318 21:22:58.809459   38073 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0318 21:22:58.809468   38073 command_runner.go:130] > #   deprecated option "conmon".
	I0318 21:22:58.809478   38073 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0318 21:22:58.809488   38073 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0318 21:22:58.809497   38073 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0318 21:22:58.809506   38073 command_runner.go:130] > #   should be moved to the container's cgroup
	I0318 21:22:58.809516   38073 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0318 21:22:58.809527   38073 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0318 21:22:58.809538   38073 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0318 21:22:58.809549   38073 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0318 21:22:58.809555   38073 command_runner.go:130] > #
	I0318 21:22:58.809562   38073 command_runner.go:130] > # Using the seccomp notifier feature:
	I0318 21:22:58.809569   38073 command_runner.go:130] > #
	I0318 21:22:58.809577   38073 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0318 21:22:58.809590   38073 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0318 21:22:58.809595   38073 command_runner.go:130] > #
	I0318 21:22:58.809614   38073 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0318 21:22:58.809627   38073 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0318 21:22:58.809634   38073 command_runner.go:130] > #
	I0318 21:22:58.809642   38073 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0318 21:22:58.809650   38073 command_runner.go:130] > # feature.
	I0318 21:22:58.809655   38073 command_runner.go:130] > #
	I0318 21:22:58.809667   38073 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0318 21:22:58.809681   38073 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0318 21:22:58.809693   38073 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0318 21:22:58.809702   38073 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0318 21:22:58.809714   38073 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0318 21:22:58.809722   38073 command_runner.go:130] > #
	I0318 21:22:58.809732   38073 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0318 21:22:58.809744   38073 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0318 21:22:58.809752   38073 command_runner.go:130] > #
	I0318 21:22:58.809761   38073 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0318 21:22:58.809776   38073 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0318 21:22:58.809784   38073 command_runner.go:130] > #
	I0318 21:22:58.809794   38073 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0318 21:22:58.809807   38073 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0318 21:22:58.809816   38073 command_runner.go:130] > # limitation.
	I0318 21:22:58.809822   38073 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0318 21:22:58.809832   38073 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0318 21:22:58.809844   38073 command_runner.go:130] > runtime_type = "oci"
	I0318 21:22:58.809853   38073 command_runner.go:130] > runtime_root = "/run/runc"
	I0318 21:22:58.809862   38073 command_runner.go:130] > runtime_config_path = ""
	I0318 21:22:58.809870   38073 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0318 21:22:58.809876   38073 command_runner.go:130] > monitor_cgroup = "pod"
	I0318 21:22:58.809884   38073 command_runner.go:130] > monitor_exec_cgroup = ""
	I0318 21:22:58.809890   38073 command_runner.go:130] > monitor_env = [
	I0318 21:22:58.809901   38073 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 21:22:58.809906   38073 command_runner.go:130] > ]
	I0318 21:22:58.809916   38073 command_runner.go:130] > privileged_without_host_devices = false
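The runtime table format documented in the comments above can also register additional handlers. As a minimal sketch only, assuming a hypothetical handler name and state directory (not part of the captured configuration), a handler that is allowed to process the seccomp notifier annotation could be declared like this:

	[crio.runtime.runtimes.runc-debug]
	# "runc-debug" is a hypothetical handler name; the binary and monitor paths reuse the runc entry above
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc-debug"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then select this handler via its RuntimeClass and set the io.kubernetes.cri-o.seccompNotifierAction annotation on the sandbox, as described in the seccomp notifier comments above.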
	I0318 21:22:58.809927   38073 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0318 21:22:58.809938   38073 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0318 21:22:58.809950   38073 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0318 21:22:58.809971   38073 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0318 21:22:58.809988   38073 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0318 21:22:58.809999   38073 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0318 21:22:58.810020   38073 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0318 21:22:58.810035   38073 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0318 21:22:58.810046   38073 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0318 21:22:58.810060   38073 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0318 21:22:58.810069   38073 command_runner.go:130] > # Example:
	I0318 21:22:58.810076   38073 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0318 21:22:58.810081   38073 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0318 21:22:58.810088   38073 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0318 21:22:58.810095   38073 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0318 21:22:58.810101   38073 command_runner.go:130] > # cpuset = 0
	I0318 21:22:58.810106   38073 command_runner.go:130] > # cpushares = "0-1"
	I0318 21:22:58.810111   38073 command_runner.go:130] > # Where:
	I0318 21:22:58.810117   38073 command_runner.go:130] > # The workload name is workload-type.
	I0318 21:22:58.810127   38073 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0318 21:22:58.810135   38073 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0318 21:22:58.810143   38073 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0318 21:22:58.810156   38073 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0318 21:22:58.810164   38073 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0318 21:22:58.810172   38073 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0318 21:22:58.810182   38073 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0318 21:22:58.810188   38073 command_runner.go:130] > # Default value is set to true
	I0318 21:22:58.810194   38073 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0318 21:22:58.810203   38073 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0318 21:22:58.810210   38073 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0318 21:22:58.810216   38073 command_runner.go:130] > # Default value is set to 'false'
	I0318 21:22:58.810223   38073 command_runner.go:130] > # disable_hostport_mapping = false
	I0318 21:22:58.810236   38073 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0318 21:22:58.810245   38073 command_runner.go:130] > #
	I0318 21:22:58.810257   38073 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0318 21:22:58.810269   38073 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0318 21:22:58.810282   38073 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0318 21:22:58.810295   38073 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0318 21:22:58.810307   38073 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0318 21:22:58.810322   38073 command_runner.go:130] > [crio.image]
	I0318 21:22:58.810336   38073 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0318 21:22:58.810345   38073 command_runner.go:130] > # default_transport = "docker://"
	I0318 21:22:58.810364   38073 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0318 21:22:58.810376   38073 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0318 21:22:58.810385   38073 command_runner.go:130] > # global_auth_file = ""
	I0318 21:22:58.810397   38073 command_runner.go:130] > # The image used to instantiate infra containers.
	I0318 21:22:58.810408   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.810420   38073 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0318 21:22:58.810439   38073 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0318 21:22:58.810451   38073 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0318 21:22:58.810463   38073 command_runner.go:130] > # This option supports live configuration reload.
	I0318 21:22:58.810473   38073 command_runner.go:130] > # pause_image_auth_file = ""
	I0318 21:22:58.810485   38073 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0318 21:22:58.810498   38073 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0318 21:22:58.810510   38073 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0318 21:22:58.810523   38073 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0318 21:22:58.810533   38073 command_runner.go:130] > # pause_command = "/pause"
	I0318 21:22:58.810546   38073 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0318 21:22:58.810566   38073 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0318 21:22:58.810578   38073 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0318 21:22:58.810590   38073 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0318 21:22:58.810603   38073 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0318 21:22:58.810616   38073 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0318 21:22:58.810626   38073 command_runner.go:130] > # pinned_images = [
	I0318 21:22:58.810635   38073 command_runner.go:130] > # ]
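The pinned_images patterns described above accept exact, glob, and keyword forms. A minimal sketch, using the pause image configured for this cluster plus hypothetical image names for the other two pattern types:

	pinned_images = [
		"registry.k8s.io/pause:3.9",   # exact match: must match the entire name
		"registry.k8s.io/etcd*",       # glob match: wildcard only at the end
		"*coredns*",                   # keyword match: wildcards on both ends
	]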
	I0318 21:22:58.810648   38073 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0318 21:22:58.810660   38073 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0318 21:22:58.810673   38073 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0318 21:22:58.810685   38073 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0318 21:22:58.810696   38073 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0318 21:22:58.810705   38073 command_runner.go:130] > # signature_policy = ""
	I0318 21:22:58.810716   38073 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0318 21:22:58.810729   38073 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0318 21:22:58.810740   38073 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0318 21:22:58.810752   38073 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0318 21:22:58.810769   38073 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0318 21:22:58.810780   38073 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0318 21:22:58.810794   38073 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0318 21:22:58.810807   38073 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0318 21:22:58.810816   38073 command_runner.go:130] > # changing them here.
	I0318 21:22:58.810827   38073 command_runner.go:130] > # insecure_registries = [
	I0318 21:22:58.810835   38073 command_runner.go:130] > # ]
	I0318 21:22:58.810849   38073 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0318 21:22:58.810859   38073 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0318 21:22:58.810869   38073 command_runner.go:130] > # image_volumes = "mkdir"
	I0318 21:22:58.810877   38073 command_runner.go:130] > # Temporary directory to use for storing big files
	I0318 21:22:58.810888   38073 command_runner.go:130] > # big_files_temporary_dir = ""
	I0318 21:22:58.810899   38073 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0318 21:22:58.810908   38073 command_runner.go:130] > # CNI plugins.
	I0318 21:22:58.810915   38073 command_runner.go:130] > [crio.network]
	I0318 21:22:58.810927   38073 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0318 21:22:58.810937   38073 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0318 21:22:58.810946   38073 command_runner.go:130] > # cni_default_network = ""
	I0318 21:22:58.810957   38073 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0318 21:22:58.810967   38073 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0318 21:22:58.810981   38073 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0318 21:22:58.810989   38073 command_runner.go:130] > # plugin_dirs = [
	I0318 21:22:58.810995   38073 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0318 21:22:58.811004   38073 command_runner.go:130] > # ]
	I0318 21:22:58.811012   38073 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0318 21:22:58.811020   38073 command_runner.go:130] > [crio.metrics]
	I0318 21:22:58.811030   38073 command_runner.go:130] > # Globally enable or disable metrics support.
	I0318 21:22:58.811038   38073 command_runner.go:130] > enable_metrics = true
	I0318 21:22:58.811045   38073 command_runner.go:130] > # Specify enabled metrics collectors.
	I0318 21:22:58.811054   38073 command_runner.go:130] > # Per default all metrics are enabled.
	I0318 21:22:58.811065   38073 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0318 21:22:58.811077   38073 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0318 21:22:58.811088   38073 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0318 21:22:58.811097   38073 command_runner.go:130] > # metrics_collectors = [
	I0318 21:22:58.811104   38073 command_runner.go:130] > # 	"operations",
	I0318 21:22:58.811114   38073 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0318 21:22:58.811130   38073 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0318 21:22:58.811140   38073 command_runner.go:130] > # 	"operations_errors",
	I0318 21:22:58.811146   38073 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0318 21:22:58.811155   38073 command_runner.go:130] > # 	"image_pulls_by_name",
	I0318 21:22:58.811165   38073 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0318 21:22:58.811174   38073 command_runner.go:130] > # 	"image_pulls_failures",
	I0318 21:22:58.811183   38073 command_runner.go:130] > # 	"image_pulls_successes",
	I0318 21:22:58.811193   38073 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0318 21:22:58.811202   38073 command_runner.go:130] > # 	"image_layer_reuse",
	I0318 21:22:58.811212   38073 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0318 21:22:58.811221   38073 command_runner.go:130] > # 	"containers_oom_total",
	I0318 21:22:58.811229   38073 command_runner.go:130] > # 	"containers_oom",
	I0318 21:22:58.811235   38073 command_runner.go:130] > # 	"processes_defunct",
	I0318 21:22:58.811243   38073 command_runner.go:130] > # 	"operations_total",
	I0318 21:22:58.811253   38073 command_runner.go:130] > # 	"operations_latency_seconds",
	I0318 21:22:58.811262   38073 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0318 21:22:58.811272   38073 command_runner.go:130] > # 	"operations_errors_total",
	I0318 21:22:58.811282   38073 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0318 21:22:58.811291   38073 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0318 21:22:58.811302   38073 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0318 21:22:58.811312   38073 command_runner.go:130] > # 	"image_pulls_success_total",
	I0318 21:22:58.811319   38073 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0318 21:22:58.811328   38073 command_runner.go:130] > # 	"containers_oom_count_total",
	I0318 21:22:58.811338   38073 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0318 21:22:58.811347   38073 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0318 21:22:58.811355   38073 command_runner.go:130] > # ]
	I0318 21:22:58.811371   38073 command_runner.go:130] > # The port on which the metrics server will listen.
	I0318 21:22:58.811380   38073 command_runner.go:130] > # metrics_port = 9090
	I0318 21:22:58.811392   38073 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0318 21:22:58.811401   38073 command_runner.go:130] > # metrics_socket = ""
	I0318 21:22:58.811415   38073 command_runner.go:130] > # The certificate for the secure metrics server.
	I0318 21:22:58.811425   38073 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0318 21:22:58.811437   38073 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0318 21:22:58.811447   38073 command_runner.go:130] > # certificate on any modification event.
	I0318 21:22:58.811456   38073 command_runner.go:130] > # metrics_cert = ""
	I0318 21:22:58.811467   38073 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0318 21:22:58.811484   38073 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0318 21:22:58.811493   38073 command_runner.go:130] > # metrics_key = ""
	I0318 21:22:58.811500   38073 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0318 21:22:58.811508   38073 command_runner.go:130] > [crio.tracing]
	I0318 21:22:58.811525   38073 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0318 21:22:58.811533   38073 command_runner.go:130] > # enable_tracing = false
	I0318 21:22:58.811550   38073 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0318 21:22:58.811560   38073 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0318 21:22:58.811574   38073 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0318 21:22:58.811583   38073 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0318 21:22:58.811589   38073 command_runner.go:130] > # CRI-O NRI configuration.
	I0318 21:22:58.811597   38073 command_runner.go:130] > [crio.nri]
	I0318 21:22:58.811603   38073 command_runner.go:130] > # Globally enable or disable NRI.
	I0318 21:22:58.811611   38073 command_runner.go:130] > # enable_nri = false
	I0318 21:22:58.811620   38073 command_runner.go:130] > # NRI socket to listen on.
	I0318 21:22:58.811629   38073 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0318 21:22:58.811638   38073 command_runner.go:130] > # NRI plugin directory to use.
	I0318 21:22:58.811649   38073 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0318 21:22:58.811659   38073 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0318 21:22:58.811670   38073 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0318 21:22:58.811677   38073 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0318 21:22:58.811686   38073 command_runner.go:130] > # nri_disable_connections = false
	I0318 21:22:58.811696   38073 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0318 21:22:58.811706   38073 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0318 21:22:58.811716   38073 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0318 21:22:58.811726   38073 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0318 21:22:58.811739   38073 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0318 21:22:58.811746   38073 command_runner.go:130] > [crio.stats]
	I0318 21:22:58.811755   38073 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0318 21:22:58.811765   38073 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0318 21:22:58.811774   38073 command_runner.go:130] > # stats_collection_period = 0
	I0318 21:22:58.812088   38073 command_runner.go:130] ! time="2024-03-18 21:22:58.770503705Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0318 21:22:58.812114   38073 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0318 21:22:58.812280   38073 cni.go:84] Creating CNI manager for ""
	I0318 21:22:58.812307   38073 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 21:22:58.812324   38073 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:22:58.812355   38073 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-119391 NodeName:multinode-119391 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:22:58.812495   38073 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-119391"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:22:58.812553   38073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:22:58.823059   38073 command_runner.go:130] > kubeadm
	I0318 21:22:58.823078   38073 command_runner.go:130] > kubectl
	I0318 21:22:58.823083   38073 command_runner.go:130] > kubelet
	I0318 21:22:58.823101   38073 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:22:58.823142   38073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:22:58.833104   38073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0318 21:22:58.851789   38073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:22:58.870357   38073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0318 21:22:58.888730   38073 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I0318 21:22:58.892858   38073 command_runner.go:130] > 192.168.39.127	control-plane.minikube.internal
	I0318 21:22:58.893007   38073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:22:59.046371   38073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:22:59.062990   38073 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391 for IP: 192.168.39.127
	I0318 21:22:59.063003   38073 certs.go:194] generating shared ca certs ...
	I0318 21:22:59.063018   38073 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:22:59.063167   38073 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:22:59.063224   38073 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:22:59.063238   38073 certs.go:256] generating profile certs ...
	I0318 21:22:59.063343   38073 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/client.key
	I0318 21:22:59.063428   38073 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.key.385a54af
	I0318 21:22:59.063475   38073 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.key
	I0318 21:22:59.063489   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 21:22:59.063508   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 21:22:59.063524   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 21:22:59.063540   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 21:22:59.063554   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 21:22:59.063572   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 21:22:59.063590   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 21:22:59.063607   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 21:22:59.063674   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:22:59.063714   38073 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:22:59.063732   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:22:59.063774   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:22:59.063806   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:22:59.063835   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:22:59.063884   38073 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:22:59.063922   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.063941   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem -> /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.063963   38073 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.064853   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:22:59.092594   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:22:59.119477   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:22:59.150022   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:22:59.176097   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 21:22:59.202972   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:22:59.228806   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:22:59.255680   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/multinode-119391/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:22:59.282472   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:22:59.308911   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:22:59.334906   38073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:22:59.361021   38073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:22:59.378745   38073 ssh_runner.go:195] Run: openssl version
	I0318 21:22:59.384894   38073 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 21:22:59.385093   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:22:59.396854   38073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.401830   38073 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.401861   38073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.401899   38073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:22:59.407853   38073 command_runner.go:130] > b5213941
	I0318 21:22:59.407916   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:22:59.417952   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:22:59.429751   38073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.434655   38073 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.434717   38073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.434758   38073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:22:59.440892   38073 command_runner.go:130] > 51391683
	I0318 21:22:59.440962   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:22:59.450682   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:22:59.462070   38073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.467117   38073 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.467213   38073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.467247   38073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:22:59.473329   38073 command_runner.go:130] > 3ec20f2e
	I0318 21:22:59.473402   38073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
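	The lines above show how the CA certificates copied to the node are activated: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout", and a symlink named "<hash>.0" (for example b5213941.0 for minikubeCA.pem) is created under /etc/ssl/certs so that OpenSSL's hashed CApath lookup can resolve it. Below is a minimal Go sketch of those two steps; installCACert is a hypothetical helper, not minikube's own code, and it assumes openssl is on PATH and that the caller may write to /etc/ssl/certs.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert mirrors the steps in the log above: ask openssl for the
	// certificate's subject hash, then symlink the PEM into /etc/ssl/certs as
	// "<hash>.0" so hashed CApath lookups can find it. Hypothetical helper only.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // keep the operation idempotent, like "test -L || ln -fs"
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}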
	I0318 21:22:59.483377   38073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:22:59.488134   38073 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:22:59.488147   38073 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 21:22:59.488153   38073 command_runner.go:130] > Device: 253,1	Inode: 8385597     Links: 1
	I0318 21:22:59.488159   38073 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 21:22:59.488164   38073 command_runner.go:130] > Access: 2024-03-18 21:16:37.818530090 +0000
	I0318 21:22:59.488169   38073 command_runner.go:130] > Modify: 2024-03-18 21:16:37.818530090 +0000
	I0318 21:22:59.488174   38073 command_runner.go:130] > Change: 2024-03-18 21:16:37.818530090 +0000
	I0318 21:22:59.488181   38073 command_runner.go:130] >  Birth: 2024-03-18 21:16:37.818530090 +0000
	I0318 21:22:59.488477   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:22:59.494840   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.494878   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:22:59.500851   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.501014   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:22:59.506924   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.507175   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:22:59.513168   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.513231   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:22:59.519343   38073 command_runner.go:130] > Certificate will not expire
	I0318 21:22:59.519408   38073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:22:59.525091   38073 command_runner.go:130] > Certificate will not expire
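	Each control-plane certificate above is checked with "openssl x509 -noout -checkend 86400", which exits non-zero if the certificate will expire within the next 86400 seconds (24 hours); a certificate that is still valid prints "Certificate will not expire", as seen in the log. An equivalent in-process check can be sketched in Go with crypto/x509; this is illustrative only (minikube itself shells out to openssl over SSH, as shown, and the expiresWithin helper name is an assumption):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, matching the semantics of "openssl x509 -checkend <seconds>".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM certificate found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if soon {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}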
	I0318 21:22:59.525417   38073 kubeadm.go:391] StartCluster: {Name:multinode-119391 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-119391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.111 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:22:59.525529   38073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:22:59.525559   38073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:22:59.565504   38073 command_runner.go:130] > 0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb
	I0318 21:22:59.565532   38073 command_runner.go:130] > 398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11
	I0318 21:22:59.565546   38073 command_runner.go:130] > 96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207
	I0318 21:22:59.565557   38073 command_runner.go:130] > 5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14
	I0318 21:22:59.565565   38073 command_runner.go:130] > 43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902
	I0318 21:22:59.565577   38073 command_runner.go:130] > d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb
	I0318 21:22:59.565586   38073 command_runner.go:130] > e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb
	I0318 21:22:59.565602   38073 command_runner.go:130] > fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7
	I0318 21:22:59.565629   38073 cri.go:89] found id: "0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb"
	I0318 21:22:59.565638   38073 cri.go:89] found id: "398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11"
	I0318 21:22:59.565641   38073 cri.go:89] found id: "96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207"
	I0318 21:22:59.565645   38073 cri.go:89] found id: "5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14"
	I0318 21:22:59.565647   38073 cri.go:89] found id: "43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902"
	I0318 21:22:59.565650   38073 cri.go:89] found id: "d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb"
	I0318 21:22:59.565652   38073 cri.go:89] found id: "e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb"
	I0318 21:22:59.565655   38073 cri.go:89] found id: "fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7"
	I0318 21:22:59.565657   38073 cri.go:89] found id: ""
	I0318 21:22:59.565708   38073 ssh_runner.go:195] Run: sudo runc list -f json
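	The container IDs recorded above come from running "crictl ps -a --quiet" on the node with a label filter that restricts output to pods in the kube-system namespace; "runc list -f json" is then consulted for the low-level runtime view. A hypothetical Go sketch of the same crictl invocation follows (the function name and error handling are assumptions, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers shells out to crictl the same way the log above
	// does: all containers (running or exited), IDs only, filtered by the
	// io.kubernetes.pod.namespace=kube-system label.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}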
	
	
	==> CRI-O <==
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.415336704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797215415313235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=609d66de-62c8-4d8d-bffc-f18623eed4cb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.416034067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b59bcd2-b051-448f-a206-9cf7399eb078 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.416147073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b59bcd2-b051-448f-a206-9cf7399eb078 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.417077486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b45b7562748e71d7d808e4760837ff01a8c1f098ba54384a33ba81d08d0689a,PodSandboxId:dc20f1507c314fdf90490ae09911da854ccb06ce0f041884724adad3ebdca9e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710797020503255615,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae,PodSandboxId:547739c0e2721a14e8f6dcf08a3da0e30e2bfd2e4421a9d108eca0b6228527fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710796987052703061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a,PodSandboxId:f0b5dedde71873493790fdc36483b567ef2c2652d79f92c562af8ba92dc870ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710796986894611487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec,PodSandboxId:2d1cf8bb01a0429c6658f518f57c5b440ddffabd8d6f706fd633839c6a2b94ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710796986752819093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]
string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e6849218ee2611d2dd1312173e2b062323c80d096478d2eeb1611d3a70a324,PodSandboxId:64735673489b91d1e3d711b115f3219263e34c77f7db1b561f71b85496c47082,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710796986692929339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a,PodSandboxId:b7fc4ed62592a84d4dcb5b5a10d26dc6069d4631eb44657e36b1b0123873571d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710796982091999554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c7403e982fd0b2e0f9e873df315329,},Annotations:map[string
]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3,PodSandboxId:442a4c7855fd57783f2520758e95ffe5caea8862b1bdecb999472c819dd8f51d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710796982096523507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de7732043fa0fb0828f2b8,},Annotations:map[string]string{io.kub
ernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184,PodSandboxId:ba57e976bf657ccb58a66cf0790ba1529454da501db50d3f8d04035217c188bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710796982008177435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11,PodSandboxId:ebd15a78f9d6c2abdeb81034e284e4761485d6f3f1f0b8638b5b157a51c8e503,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710796981983065728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932861c3dfa0de87150b321d8d78af52ade713c36cc49b3bb0b4511e314ff68e,PodSandboxId:af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710796676091142865,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb,PodSandboxId:01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710796628452003335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.kubernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11,PodSandboxId:47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710796628421392384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207,PodSandboxId:62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710796626618455559,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14,PodSandboxId:a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710796622628455059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb,PodSandboxId:d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710796601879187192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902,PodSandboxId:c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710796601890978485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de77320
43fa0fb0828f2b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb,PodSandboxId:acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710796601786623838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c74
03e982fd0b2e0f9e873df315329,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7,PodSandboxId:aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710796601773824534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map
[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b59bcd2-b051-448f-a206-9cf7399eb078 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.475318758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b899f88-04e5-4c26-9aff-c384b1f5c02b name=/runtime.v1.RuntimeService/Version
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.475444014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b899f88-04e5-4c26-9aff-c384b1f5c02b name=/runtime.v1.RuntimeService/Version
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.477521521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=811259b6-870e-440d-8c5e-e610508e145d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.478729534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797215478704253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=811259b6-870e-440d-8c5e-e610508e145d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.479270236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b7c34b2-0e34-42b3-89fc-8c3973a0e2e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.479355818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b7c34b2-0e34-42b3-89fc-8c3973a0e2e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.479769483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b45b7562748e71d7d808e4760837ff01a8c1f098ba54384a33ba81d08d0689a,PodSandboxId:dc20f1507c314fdf90490ae09911da854ccb06ce0f041884724adad3ebdca9e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710797020503255615,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae,PodSandboxId:547739c0e2721a14e8f6dcf08a3da0e30e2bfd2e4421a9d108eca0b6228527fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710796987052703061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a,PodSandboxId:f0b5dedde71873493790fdc36483b567ef2c2652d79f92c562af8ba92dc870ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710796986894611487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec,PodSandboxId:2d1cf8bb01a0429c6658f518f57c5b440ddffabd8d6f706fd633839c6a2b94ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710796986752819093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]
string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e6849218ee2611d2dd1312173e2b062323c80d096478d2eeb1611d3a70a324,PodSandboxId:64735673489b91d1e3d711b115f3219263e34c77f7db1b561f71b85496c47082,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710796986692929339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a,PodSandboxId:b7fc4ed62592a84d4dcb5b5a10d26dc6069d4631eb44657e36b1b0123873571d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710796982091999554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c7403e982fd0b2e0f9e873df315329,},Annotations:map[string
]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3,PodSandboxId:442a4c7855fd57783f2520758e95ffe5caea8862b1bdecb999472c819dd8f51d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710796982096523507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de7732043fa0fb0828f2b8,},Annotations:map[string]string{io.kub
ernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184,PodSandboxId:ba57e976bf657ccb58a66cf0790ba1529454da501db50d3f8d04035217c188bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710796982008177435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11,PodSandboxId:ebd15a78f9d6c2abdeb81034e284e4761485d6f3f1f0b8638b5b157a51c8e503,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710796981983065728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932861c3dfa0de87150b321d8d78af52ade713c36cc49b3bb0b4511e314ff68e,PodSandboxId:af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710796676091142865,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb,PodSandboxId:01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710796628452003335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.kubernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11,PodSandboxId:47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710796628421392384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207,PodSandboxId:62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710796626618455559,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14,PodSandboxId:a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710796622628455059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb,PodSandboxId:d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710796601879187192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902,PodSandboxId:c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710796601890978485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de77320
43fa0fb0828f2b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb,PodSandboxId:acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710796601786623838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c74
03e982fd0b2e0f9e873df315329,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7,PodSandboxId:aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710796601773824534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map
[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b7c34b2-0e34-42b3-89fc-8c3973a0e2e6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.525507357Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c84d24cb-664f-4e0b-94eb-29ff11e69abc name=/runtime.v1.RuntimeService/Version
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.525973138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c84d24cb-664f-4e0b-94eb-29ff11e69abc name=/runtime.v1.RuntimeService/Version
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.534058316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76e6349f-176f-482a-8127-288680d822b3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.534821539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797215534793262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76e6349f-176f-482a-8127-288680d822b3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.535645236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7addfbe-bcb1-4d02-8f3b-3c10fa4a3e6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.535701075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7addfbe-bcb1-4d02-8f3b-3c10fa4a3e6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.536062896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b45b7562748e71d7d808e4760837ff01a8c1f098ba54384a33ba81d08d0689a,PodSandboxId:dc20f1507c314fdf90490ae09911da854ccb06ce0f041884724adad3ebdca9e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710797020503255615,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae,PodSandboxId:547739c0e2721a14e8f6dcf08a3da0e30e2bfd2e4421a9d108eca0b6228527fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710796987052703061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a,PodSandboxId:f0b5dedde71873493790fdc36483b567ef2c2652d79f92c562af8ba92dc870ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710796986894611487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec,PodSandboxId:2d1cf8bb01a0429c6658f518f57c5b440ddffabd8d6f706fd633839c6a2b94ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710796986752819093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]
string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e6849218ee2611d2dd1312173e2b062323c80d096478d2eeb1611d3a70a324,PodSandboxId:64735673489b91d1e3d711b115f3219263e34c77f7db1b561f71b85496c47082,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710796986692929339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a,PodSandboxId:b7fc4ed62592a84d4dcb5b5a10d26dc6069d4631eb44657e36b1b0123873571d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710796982091999554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c7403e982fd0b2e0f9e873df315329,},Annotations:map[string
]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3,PodSandboxId:442a4c7855fd57783f2520758e95ffe5caea8862b1bdecb999472c819dd8f51d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710796982096523507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de7732043fa0fb0828f2b8,},Annotations:map[string]string{io.kub
ernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184,PodSandboxId:ba57e976bf657ccb58a66cf0790ba1529454da501db50d3f8d04035217c188bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710796982008177435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11,PodSandboxId:ebd15a78f9d6c2abdeb81034e284e4761485d6f3f1f0b8638b5b157a51c8e503,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710796981983065728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932861c3dfa0de87150b321d8d78af52ade713c36cc49b3bb0b4511e314ff68e,PodSandboxId:af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710796676091142865,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb,PodSandboxId:01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710796628452003335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.kubernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11,PodSandboxId:47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710796628421392384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207,PodSandboxId:62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710796626618455559,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14,PodSandboxId:a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710796622628455059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb,PodSandboxId:d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710796601879187192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902,PodSandboxId:c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710796601890978485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de77320
43fa0fb0828f2b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb,PodSandboxId:acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710796601786623838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c74
03e982fd0b2e0f9e873df315329,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7,PodSandboxId:aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710796601773824534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map
[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7addfbe-bcb1-4d02-8f3b-3c10fa4a3e6f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.589304381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=666c1d5b-4599-4683-be5c-29556e278711 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.589431479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=666c1d5b-4599-4683-be5c-29556e278711 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.590819133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b619924-f42b-48ef-929b-1a6eaf28cc86 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.591332173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797215591308068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b619924-f42b-48ef-929b-1a6eaf28cc86 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.592003925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b7b3063-f343-4ba5-8ae0-849870427b90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.592057680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b7b3063-f343-4ba5-8ae0-849870427b90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:26:55 multinode-119391 crio[2889]: time="2024-03-18 21:26:55.592381038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8b45b7562748e71d7d808e4760837ff01a8c1f098ba54384a33ba81d08d0689a,PodSandboxId:dc20f1507c314fdf90490ae09911da854ccb06ce0f041884724adad3ebdca9e7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710797020503255615,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae,PodSandboxId:547739c0e2721a14e8f6dcf08a3da0e30e2bfd2e4421a9d108eca0b6228527fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710796987052703061,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a,PodSandboxId:f0b5dedde71873493790fdc36483b567ef2c2652d79f92c562af8ba92dc870ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710796986894611487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec,PodSandboxId:2d1cf8bb01a0429c6658f518f57c5b440ddffabd8d6f706fd633839c6a2b94ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710796986752819093,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]
string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e6849218ee2611d2dd1312173e2b062323c80d096478d2eeb1611d3a70a324,PodSandboxId:64735673489b91d1e3d711b115f3219263e34c77f7db1b561f71b85496c47082,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710796986692929339,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a,PodSandboxId:b7fc4ed62592a84d4dcb5b5a10d26dc6069d4631eb44657e36b1b0123873571d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710796982091999554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c7403e982fd0b2e0f9e873df315329,},Annotations:map[string
]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3,PodSandboxId:442a4c7855fd57783f2520758e95ffe5caea8862b1bdecb999472c819dd8f51d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710796982096523507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de7732043fa0fb0828f2b8,},Annotations:map[string]string{io.kub
ernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184,PodSandboxId:ba57e976bf657ccb58a66cf0790ba1529454da501db50d3f8d04035217c188bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710796982008177435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11,PodSandboxId:ebd15a78f9d6c2abdeb81034e284e4761485d6f3f1f0b8638b5b157a51c8e503,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710796981983065728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:932861c3dfa0de87150b321d8d78af52ade713c36cc49b3bb0b4511e314ff68e,PodSandboxId:af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710796676091142865,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dr5bb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4c138ceb-99bf-4e93-a44b-e5feba8348a0,},Annotations:map[string]string{io.kubernetes.container.hash: 9df1d23,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0510b1eb0ef35bbe46afb185ed4fe7a96d5949b20c189fba72ac1fade2a694fb,PodSandboxId:01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710796628452003335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e37a8f5f-a4f2-46bc-b180-7bca46e587f9,},Annotations:map[string]string{io.kubernetes.container.hash: 64edc916,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11,PodSandboxId:47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710796628421392384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj892,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5685ec6-fd70-4637-a858-742004871377,},Annotations:map[string]string{io.kubernetes.container.hash: 8532269f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207,PodSandboxId:62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710796626618455559,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6zr7q,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 227a8900-d2de-4014-8d65-71e10e4da7ce,},Annotations:map[string]string{io.kubernetes.container.hash: f9d8f4cf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14,PodSandboxId:a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710796622628455059,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c9wgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4310f17f-f7dc-43c8-b39f-87b1169e801e,},Annotations:map[string]string{io.kubernetes.container.hash: d68433e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb,PodSandboxId:d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710796601879187192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
4976ceef730c00fb0e0a79a308bfcc6,},Annotations:map[string]string{io.kubernetes.container.hash: 794b106,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902,PodSandboxId:c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710796601890978485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bad22a9c0de77320
43fa0fb0828f2b8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb,PodSandboxId:acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710796601786623838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c74
03e982fd0b2e0f9e873df315329,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7,PodSandboxId:aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710796601773824534,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-119391,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 090796776b5603794e61ee5620edcec7,},Annotations:map
[string]string{io.kubernetes.container.hash: a5689f3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b7b3063-f343-4ba5-8ae0-849870427b90 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8b45b7562748e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   dc20f1507c314       busybox-5b5d89c9d6-dr5bb
	97ab1bff4ddf1       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   547739c0e2721       kindnet-6zr7q
	c2ab6c1338fff       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   f0b5dedde7187       coredns-5dd5756b68-xj892
	1b711be87d96c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   2d1cf8bb01a04       kube-proxy-c9wgb
	e2e6849218ee2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   64735673489b9       storage-provisioner
	52147f9d7d0df       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   442a4c7855fd5       kube-scheduler-multinode-119391
	758c08c47f939       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   b7fc4ed62592a       kube-controller-manager-multinode-119391
	5579372217f2f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   ba57e976bf657       etcd-multinode-119391
	96a1cf12459b1       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   ebd15a78f9d6c       kube-apiserver-multinode-119391
	932861c3dfa0d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   af7475bf9389b       busybox-5b5d89c9d6-dr5bb
	0510b1eb0ef35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   01d9677ae8258       storage-provisioner
	398bf6b192733       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   47ba2c35bc6ad       coredns-5dd5756b68-xj892
	96ec94d755227       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   62625602ffb83       kindnet-6zr7q
	5c6e17a452796       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   a964aaa38e35f       kube-proxy-c9wgb
	43b05d04b29b4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   c6cf59e3b1331       kube-scheduler-multinode-119391
	d889df6742370       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   d52b4552b2062       kube-apiserver-multinode-119391
	e6fd37ada119d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   acffb6afe556b       kube-controller-manager-multinode-119391
	fb5aec4cb8dd3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   aa820b6a5ec03       etcd-multinode-119391
	
	
	==> coredns [398bf6b1927330187c312d996fe1052abbb2ad403de31749f87a6c589180dc11] <==
	[INFO] 10.244.1.2:36769 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001726354s
	[INFO] 10.244.1.2:59505 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110534s
	[INFO] 10.244.1.2:43397 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201607s
	[INFO] 10.244.1.2:59395 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00113884s
	[INFO] 10.244.1.2:47829 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201218s
	[INFO] 10.244.1.2:35041 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111801s
	[INFO] 10.244.1.2:42015 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095353s
	[INFO] 10.244.0.3:44370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007968s
	[INFO] 10.244.0.3:45256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046964s
	[INFO] 10.244.0.3:36937 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089471s
	[INFO] 10.244.0.3:51478 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004386s
	[INFO] 10.244.1.2:35131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140111s
	[INFO] 10.244.1.2:49384 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135795s
	[INFO] 10.244.1.2:50850 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007096s
	[INFO] 10.244.1.2:37905 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075959s
	[INFO] 10.244.0.3:37500 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107301s
	[INFO] 10.244.0.3:53084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138644s
	[INFO] 10.244.0.3:37651 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009517s
	[INFO] 10.244.0.3:36490 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145484s
	[INFO] 10.244.1.2:55397 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151193s
	[INFO] 10.244.1.2:47870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131156s
	[INFO] 10.244.1.2:49477 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000179025s
	[INFO] 10.244.1.2:32943 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000210468s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c2ab6c1338fffa88df7617beefda7b385ed7fbf528025d075299e983af38fa3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45688 - 3851 "HINFO IN 1606542482993132714.4655986967184080250. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016265241s
	
	
	==> describe nodes <==
	Name:               multinode-119391
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-119391
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=multinode-119391
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T21_16_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:16:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-119391
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:26:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:23:05 +0000   Mon, 18 Mar 2024 21:16:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:23:05 +0000   Mon, 18 Mar 2024 21:16:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:23:05 +0000   Mon, 18 Mar 2024 21:16:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:23:05 +0000   Mon, 18 Mar 2024 21:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-119391
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ced609a92f1a46e48e0bce516406bccd
	  System UUID:                ced609a9-2f1a-46e4-8e0b-ce516406bccd
	  Boot ID:                    9d164ddd-7fc2-478d-af34-eedda433089a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dr5bb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 coredns-5dd5756b68-xj892                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m55s
	  kube-system                 etcd-multinode-119391                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-6zr7q                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m55s
	  kube-system                 kube-apiserver-multinode-119391             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-119391    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-c9wgb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-scheduler-multinode-119391             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-119391 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-119391 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-119391 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m56s                  node-controller  Node multinode-119391 event: Registered Node multinode-119391 in Controller
	  Normal  NodeReady                9m48s                  kubelet          Node multinode-119391 status is now: NodeReady
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node multinode-119391 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node multinode-119391 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node multinode-119391 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m37s                  node-controller  Node multinode-119391 event: Registered Node multinode-119391 in Controller
	
	
	Name:               multinode-119391-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-119391-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=multinode-119391
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T21_23_49_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:23:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-119391-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:24:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 21:24:19 +0000   Mon, 18 Mar 2024 21:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 21:24:19 +0000   Mon, 18 Mar 2024 21:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 21:24:19 +0000   Mon, 18 Mar 2024 21:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 21:24:19 +0000   Mon, 18 Mar 2024 21:25:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.159
	  Hostname:    multinode-119391-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b79ff4b3ad8948948e81374df92dd3d1
	  System UUID:                b79ff4b3-ad89-4894-8e81-374df92dd3d1
	  Boot ID:                    e5004a05-e979-4cd9-842c-90a668098a75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zxfmj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kindnet-hb4lj               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m15s
	  kube-system                 kube-proxy-n5fr8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  Starting                 3m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m15s (x5 over 9m17s)  kubelet          Node multinode-119391-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s (x5 over 9m17s)  kubelet          Node multinode-119391-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s (x5 over 9m17s)  kubelet          Node multinode-119391-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m6s                   kubelet          Node multinode-119391-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m7s (x5 over 3m8s)    kubelet          Node multinode-119391-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x5 over 3m8s)    kubelet          Node multinode-119391-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x5 over 3m8s)    kubelet          Node multinode-119391-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m59s                  kubelet          Node multinode-119391-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-119391-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.175632] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.148042] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.296231] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +5.292192] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.064543] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.635260] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[  +0.569576] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.194211] systemd-fstab-generator[1293]: Ignoring "noauto" option for root device
	[  +0.090245] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.248848] systemd-fstab-generator[1487]: Ignoring "noauto" option for root device
	[  +0.110889] kauditd_printk_skb: 21 callbacks suppressed
	[Mar18 21:17] kauditd_printk_skb: 51 callbacks suppressed
	[ +46.797001] kauditd_printk_skb: 21 callbacks suppressed
	[Mar18 21:22] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.171494] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.171950] systemd-fstab-generator[2833]: Ignoring "noauto" option for root device
	[  +0.140102] systemd-fstab-generator[2845]: Ignoring "noauto" option for root device
	[  +0.305594] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +2.231356] systemd-fstab-generator[2975]: Ignoring "noauto" option for root device
	[  +1.919428] systemd-fstab-generator[3099]: Ignoring "noauto" option for root device
	[Mar18 21:23] kauditd_printk_skb: 144 callbacks suppressed
	[  +5.016186] kauditd_printk_skb: 55 callbacks suppressed
	[ +12.085266] kauditd_printk_skb: 7 callbacks suppressed
	[  +2.246564] systemd-fstab-generator[3929]: Ignoring "noauto" option for root device
	[ +19.191078] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [5579372217f2f08e6aa93a0036044a00bf76ae651afaf125bf0030fbf707a184] <==
	{"level":"info","ts":"2024-03-18T21:23:02.597366Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T21:23:02.597378Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T21:23:02.597765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c switched to configuration voters=(11368748717410181932)"}
	{"level":"info","ts":"2024-03-18T21:23:02.597857Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","added-peer-id":"9dc5e8b969e9632c","added-peer-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-03-18T21:23:02.598004Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:23:02.598031Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:23:02.612272Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T21:23:02.612462Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9dc5e8b969e9632c","initial-advertise-peer-urls":["https://192.168.39.127:2380"],"listen-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.127:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T21:23:02.612527Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T21:23:02.612696Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-03-18T21:23:02.612727Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-03-18T21:23:04.237201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T21:23:04.237264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T21:23:04.237348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgPreVoteResp from 9dc5e8b969e9632c at term 2"}
	{"level":"info","ts":"2024-03-18T21:23:04.237365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T21:23:04.237412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgVoteResp from 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-03-18T21:23:04.237424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became leader at term 3"}
	{"level":"info","ts":"2024-03-18T21:23:04.237432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-03-18T21:23:04.246766Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9dc5e8b969e9632c","local-member-attributes":"{Name:multinode-119391 ClientURLs:[https://192.168.39.127:2379]}","request-path":"/0/members/9dc5e8b969e9632c/attributes","cluster-id":"367c7cb0db09c3ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T21:23:04.24691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:23:04.247186Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:23:04.248473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.127:2379"}
	{"level":"info","ts":"2024-03-18T21:23:04.248637Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T21:23:04.248864Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T21:23:04.2489Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [fb5aec4cb8dd35cfb65402c6855d2fe019ca89f9412841fac70bfb86e03153f7] <==
	{"level":"info","ts":"2024-03-18T21:16:43.004101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became leader at term 2"}
	{"level":"info","ts":"2024-03-18T21:16:43.004109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 2"}
	{"level":"info","ts":"2024-03-18T21:16:43.005439Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9dc5e8b969e9632c","local-member-attributes":"{Name:multinode-119391 ClientURLs:[https://192.168.39.127:2379]}","request-path":"/0/members/9dc5e8b969e9632c/attributes","cluster-id":"367c7cb0db09c3ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T21:16:43.005654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:16:43.006486Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.127:2379"}
	{"level":"info","ts":"2024-03-18T21:16:43.006686Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:16:43.006828Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:16:43.007742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T21:16:43.007968Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T21:16:43.008012Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T21:16:43.016774Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:16:43.016875Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:16:43.01692Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-03-18T21:18:27.198539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.863106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-03-18T21:18:27.199313Z","caller":"traceutil/trace.go:171","msg":"trace[188100761] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:575; }","duration":"133.754056ms","start":"2024-03-18T21:18:27.065505Z","end":"2024-03-18T21:18:27.199259Z","steps":["trace[188100761] 'range keys from in-memory index tree'  (duration: 132.623314ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T21:21:24.539545Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-18T21:21:24.539866Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-119391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"]}
	{"level":"warn","ts":"2024-03-18T21:21:24.540078Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T21:21:24.540221Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T21:21:24.630637Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.127:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T21:21:24.630863Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.127:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T21:21:24.630974Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9dc5e8b969e9632c","current-leader-member-id":"9dc5e8b969e9632c"}
	{"level":"info","ts":"2024-03-18T21:21:24.63363Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-03-18T21:21:24.633762Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-03-18T21:21:24.633801Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-119391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"]}
	
	
	==> kernel <==
	 21:26:56 up 10 min,  0 users,  load average: 0.16, 0.19, 0.11
	Linux multinode-119391 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [96ec94d7552274cd79bbb3c49ba5fe01e1236594dd862a0867c24c935cf83207] <==
	I0318 21:20:37.729983       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:20:47.736363       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:20:47.736385       1 main.go:227] handling current node
	I0318 21:20:47.736394       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:20:47.736405       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:20:47.736708       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:20:47.736725       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:20:57.741541       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:20:57.741716       1 main.go:227] handling current node
	I0318 21:20:57.741740       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:20:57.741758       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:20:57.741887       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:20:57.741908       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:21:07.753614       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:21:07.753754       1 main.go:227] handling current node
	I0318 21:21:07.753787       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:21:07.753807       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:21:07.753979       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:21:07.754117       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	I0318 21:21:17.765254       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:21:17.765304       1 main.go:227] handling current node
	I0318 21:21:17.765314       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:21:17.765320       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:21:17.765437       1 main.go:223] Handling node with IPs: map[192.168.39.111:{}]
	I0318 21:21:17.765472       1 main.go:250] Node multinode-119391-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [97ab1bff4ddf196db69f9333ba999d6d87e1610badcdf65248df14adb47e95ae] <==
	I0318 21:25:48.082430       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:25:58.096151       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:25:58.096203       1 main.go:227] handling current node
	I0318 21:25:58.096220       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:25:58.096226       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:26:08.101302       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:26:08.101424       1 main.go:227] handling current node
	I0318 21:26:08.101452       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:26:08.101481       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:26:18.115128       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:26:18.115184       1 main.go:227] handling current node
	I0318 21:26:18.115195       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:26:18.115201       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:26:28.120058       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:26:28.120110       1 main.go:227] handling current node
	I0318 21:26:28.120127       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:26:28.120134       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:26:38.125648       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:26:38.125709       1 main.go:227] handling current node
	I0318 21:26:38.125732       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:26:38.125738       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	I0318 21:26:48.139788       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0318 21:26:48.139833       1 main.go:227] handling current node
	I0318 21:26:48.139845       1 main.go:223] Handling node with IPs: map[192.168.39.159:{}]
	I0318 21:26:48.139851       1 main.go:250] Node multinode-119391-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [96a1cf12459b15ea476511ad3305c909fac139a5bb7cb00a07bbfe98366fad11] <==
	I0318 21:23:05.723081       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 21:23:05.723117       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 21:23:05.723151       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 21:23:05.785734       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 21:23:05.786006       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 21:23:05.834997       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 21:23:05.836219       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 21:23:05.836260       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 21:23:05.838344       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 21:23:05.839030       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 21:23:05.839204       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 21:23:05.839246       1 aggregator.go:166] initial CRD sync complete...
	I0318 21:23:05.839265       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 21:23:05.839270       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 21:23:05.839274       1 cache.go:39] Caches are synced for autoregister controller
	I0318 21:23:05.855467       1 shared_informer.go:318] Caches are synced for configmaps
	E0318 21:23:05.867500       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 21:23:06.648880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 21:23:08.544850       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 21:23:08.691041       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 21:23:08.706847       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 21:23:08.785241       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 21:23:08.792493       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 21:23:18.800884       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 21:23:18.950008       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d889df6742370510869c6ce9033f732d7d8e6629c12bf9299cb86c097ff861bb] <==
	W0318 21:21:24.572377       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.572445       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.572502       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.572687       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0318 21:21:24.573203       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 21:21:24.573321       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 21:21:24.573512       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 21:21:24.573676       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 21:21:24.573854       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0318 21:21:24.574522       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574669       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574696       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574736       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574797       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574826       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574883       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574914       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.574982       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575040       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575099       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575159       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575222       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575301       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575388       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 21:21:24.575486       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [758c08c47f9392e3aea47a62f85cc9ce64c53db27c76ee22d4a7e05f6151b59a] <==
	I0318 21:23:56.033606       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:23:56.056438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.75µs"
	I0318 21:23:56.066718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.646µs"
	I0318 21:23:58.810468       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-zxfmj" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-zxfmj"
	I0318 21:24:00.234792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.506873ms"
	I0318 21:24:00.236440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.847µs"
	I0318 21:24:16.367925       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:24:18.813079       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-119391-m03 event: Removing Node multinode-119391-m03 from Controller"
	I0318 21:24:18.903532       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-119391-m03\" does not exist"
	I0318 21:24:18.903793       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:24:18.917492       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-119391-m03" podCIDRs=["10.244.2.0/24"]
	I0318 21:24:23.813833       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-119391-m03 event: Registered Node multinode-119391-m03 in Controller"
	I0318 21:24:28.606366       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m03"
	I0318 21:24:34.418317       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:24:38.837542       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-119391-m03 event: Removing Node multinode-119391-m03 from Controller"
	I0318 21:24:58.715876       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-hhjx2"
	I0318 21:24:58.755370       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-hhjx2"
	I0318 21:24:58.755759       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-9df9r"
	I0318 21:24:58.785189       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-9df9r"
	I0318 21:25:13.856326       1 event.go:307] "Event occurred" object="multinode-119391-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-119391-m02 status is now: NodeNotReady"
	I0318 21:25:13.870874       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-n5fr8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:25:13.881762       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-zxfmj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:25:13.895482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.586858ms"
	I0318 21:25:13.895637       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="99.19µs"
	I0318 21:25:13.906660       1 event.go:307] "Event occurred" object="kube-system/kindnet-hb4lj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [e6fd37ada119d0b604be39e7441ca49f5b496d59b0a82d897267665270c9bebb] <==
	I0318 21:18:28.490666       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-119391-m03\" does not exist"
	I0318 21:18:28.491261       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:18:28.516196       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-119391-m03" podCIDRs=["10.244.2.0/24"]
	I0318 21:18:28.528530       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9df9r"
	I0318 21:18:28.528635       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhjx2"
	I0318 21:18:29.584293       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-119391-m03"
	I0318 21:18:29.584379       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-119391-m03 event: Registered Node multinode-119391-m03 in Controller"
	I0318 21:18:37.324663       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:19:08.280793       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:19:09.606820       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-119391-m03 event: Removing Node multinode-119391-m03 from Controller"
	I0318 21:19:10.733700       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-119391-m03\" does not exist"
	I0318 21:19:10.737197       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:19:10.750020       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-119391-m03" podCIDRs=["10.244.3.0/24"]
	I0318 21:19:14.607698       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-119391-m03 event: Registered Node multinode-119391-m03 in Controller"
	I0318 21:19:18.049644       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:20:04.640417       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-119391-m02"
	I0318 21:20:04.641516       1 event.go:307] "Event occurred" object="multinode-119391-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-119391-m03 status is now: NodeNotReady"
	I0318 21:20:04.646708       1 event.go:307] "Event occurred" object="multinode-119391-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-119391-m02 status is now: NodeNotReady"
	I0318 21:20:04.656300       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-9df9r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:20:04.661036       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-w6n2g" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:20:04.676326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.239044ms"
	I0318 21:20:04.682225       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="33.409µs"
	I0318 21:20:04.682026       1 event.go:307] "Event occurred" object="kube-system/kindnet-hhjx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:20:04.683976       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-n5fr8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 21:20:04.696783       1 event.go:307] "Event occurred" object="kube-system/kindnet-hb4lj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [1b711be87d96c7b5b75cad3529e13aa133c0dc4a0a1433854ec29525c4b13aec] <==
	I0318 21:23:07.146266       1 server_others.go:69] "Using iptables proxy"
	I0318 21:23:07.161482       1 node.go:141] Successfully retrieved node IP: 192.168.39.127
	I0318 21:23:07.263647       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 21:23:07.263703       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:23:07.269502       1 server_others.go:152] "Using iptables Proxier"
	I0318 21:23:07.269671       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:23:07.269996       1 server.go:846] "Version info" version="v1.28.4"
	I0318 21:23:07.270033       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:23:07.272037       1 config.go:188] "Starting service config controller"
	I0318 21:23:07.272081       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:23:07.272105       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:23:07.272108       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:23:07.272474       1 config.go:315] "Starting node config controller"
	I0318 21:23:07.272515       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 21:23:07.373064       1 shared_informer.go:318] Caches are synced for node config
	I0318 21:23:07.373113       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:23:07.373136       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5c6e17a45279644966823a550892537f40fa242936a2cf0302bafc35b900cc14] <==
	I0318 21:17:02.937304       1 server_others.go:69] "Using iptables proxy"
	I0318 21:17:02.954712       1 node.go:141] Successfully retrieved node IP: 192.168.39.127
	I0318 21:17:03.004960       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 21:17:03.005002       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:17:03.008032       1 server_others.go:152] "Using iptables Proxier"
	I0318 21:17:03.008849       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:17:03.009139       1 server.go:846] "Version info" version="v1.28.4"
	I0318 21:17:03.009174       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:17:03.011427       1 config.go:188] "Starting service config controller"
	I0318 21:17:03.011856       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:17:03.011915       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:17:03.011921       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:17:03.014133       1 config.go:315] "Starting node config controller"
	I0318 21:17:03.014170       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 21:17:03.112295       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 21:17:03.112356       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:17:03.114381       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [43b05d04b29b4f17d739d17448b060bf81e99439a66f6ddb4bcfa949a2a32902] <==
	W0318 21:16:44.613751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 21:16:44.614250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 21:16:44.613800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 21:16:44.614265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 21:16:44.613397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 21:16:44.614277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 21:16:45.495850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 21:16:45.495997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 21:16:45.564811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 21:16:45.564860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 21:16:45.618170       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 21:16:45.620618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 21:16:45.629294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 21:16:45.629345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 21:16:45.774029       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 21:16:45.774148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 21:16:45.847506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 21:16:45.847999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 21:16:46.105929       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 21:16:46.107144       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 21:16:48.404313       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 21:21:24.534598       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 21:21:24.534775       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 21:21:24.535166       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0318 21:21:24.549784       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [52147f9d7d0df833fafd6461dc5b8098efaceaa5d8ba8a28d192f58aacf562a3] <==
	I0318 21:23:02.807247       1 serving.go:348] Generated self-signed cert in-memory
	W0318 21:23:05.747953       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 21:23:05.748009       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 21:23:05.748021       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 21:23:05.748027       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 21:23:05.797060       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 21:23:05.797109       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:23:05.804302       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 21:23:05.804462       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 21:23:05.804519       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 21:23:05.804544       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 21:23:05.904667       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 21:25:01 multinode-119391 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:25:01 multinode-119391 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.232778    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4c138ceb-99bf-4e93-a44b-e5feba8348a0/crio-af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d: Error finding container af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d: Status 404 returned error can't find the container with id af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.233221    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4310f17f-f7dc-43c8-b39f-87b1169e801e/crio-a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa: Error finding container a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa: Status 404 returned error can't find the container with id a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.233537    3106 manager.go:1106] Failed to create existing container: /kubepods/pod227a8900-d2de-4014-8d65-71e10e4da7ce/crio-62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f: Error finding container 62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f: Status 404 returned error can't find the container with id 62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.233896    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod2bad22a9c0de7732043fa0fb0828f2b8/crio-c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864: Error finding container c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864: Status 404 returned error can't find the container with id c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.234222    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod090796776b5603794e61ee5620edcec7/crio-aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c: Error finding container aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c: Status 404 returned error can't find the container with id aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.234497    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda5685ec6-fd70-4637-a858-742004871377/crio-47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9: Error finding container 47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9: Status 404 returned error can't find the container with id 47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.234887    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode37a8f5f-a4f2-46bc-b180-7bca46e587f9/crio-01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a: Error finding container 01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a: Status 404 returned error can't find the container with id 01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.235208    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf4976ceef730c00fb0e0a79a308bfcc6/crio-d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e: Error finding container d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e: Status 404 returned error can't find the container with id d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e
	Mar 18 21:25:01 multinode-119391 kubelet[3106]: E0318 21:25:01.235480    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod35c7403e982fd0b2e0f9e873df315329/crio-acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63: Error finding container acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63: Status 404 returned error can't find the container with id acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.164665    3106 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 21:26:01 multinode-119391 kubelet[3106]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 21:26:01 multinode-119391 kubelet[3106]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 21:26:01 multinode-119391 kubelet[3106]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 21:26:01 multinode-119391 kubelet[3106]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.233744    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf4976ceef730c00fb0e0a79a308bfcc6/crio-d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e: Error finding container d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e: Status 404 returned error can't find the container with id d52b4552b20625c43e9cb485dae37221526a5a6fdbda96c1c9c211f03b207a4e
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.234120    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod35c7403e982fd0b2e0f9e873df315329/crio-acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63: Error finding container acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63: Status 404 returned error can't find the container with id acffb6afe556b6d12455b084fc2fa8be9b6bcc8f897919e737e6c467cde3ff63
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.234499    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode37a8f5f-a4f2-46bc-b180-7bca46e587f9/crio-01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a: Error finding container 01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a: Status 404 returned error can't find the container with id 01d9677ae8258f5c2ea36acff9bf78f2f304f5ecd9bb64756313fb22086be96a
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.234939    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/poda5685ec6-fd70-4637-a858-742004871377/crio-47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9: Error finding container 47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9: Status 404 returned error can't find the container with id 47ba2c35bc6ade632f089c8c100d1d29646d57ec57b21feb33838d6a5173c0b9
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.235235    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod2bad22a9c0de7732043fa0fb0828f2b8/crio-c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864: Error finding container c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864: Status 404 returned error can't find the container with id c6cf59e3b1331d37bebe42d3803aba94b92aa2e05edff3c3e42cb1c41fd08864
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.235770    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4c138ceb-99bf-4e93-a44b-e5feba8348a0/crio-af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d: Error finding container af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d: Status 404 returned error can't find the container with id af7475bf9389bc22f7f9ee23ff50708fd16a14c17d7a93442e9837eb6c24ea4d
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.236063    3106 manager.go:1106] Failed to create existing container: /kubepods/pod227a8900-d2de-4014-8d65-71e10e4da7ce/crio-62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f: Error finding container 62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f: Status 404 returned error can't find the container with id 62625602ffb83b33581bb4a8d51a2ca9f3ae93fb08c611857a02b9553577530f
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.236384    3106 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod090796776b5603794e61ee5620edcec7/crio-aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c: Error finding container aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c: Status 404 returned error can't find the container with id aa820b6a5ec03756b20399c0accb1be7cb6505903289cf53572b93fa0ea88f4c
	Mar 18 21:26:01 multinode-119391 kubelet[3106]: E0318 21:26:01.236801    3106 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod4310f17f-f7dc-43c8-b39f-87b1169e801e/crio-a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa: Error finding container a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa: Status 404 returned error can't find the container with id a964aaa38e35fbf9ac6b9d85bfa93173fceb3f3943c03c06ec4071a3a1a231aa
	

-- /stdout --
** stderr ** 
	E0318 21:26:55.118182   39533 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18421-5321/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-119391 -n multinode-119391
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-119391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.49s)

TestPreload (250.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-524155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-524155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m47.933358664s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-524155 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-524155 image pull gcr.io/k8s-minikube/busybox: (2.980243366s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-524155
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-524155: (7.615113832s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-524155 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-524155 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m8.950408039s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-524155 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-03-18 21:34:48.893911983 +0000 UTC m=+3945.795700423
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-524155 -n test-preload-524155
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-524155 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-524155 logs -n 25: (1.131863195s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391 sudo cat                                       | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m03_multinode-119391.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt                       | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m02:/home/docker/cp-test_multinode-119391-m03_multinode-119391-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n                                                                 | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | multinode-119391-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-119391 ssh -n multinode-119391-m02 sudo cat                                   | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	|         | /home/docker/cp-test_multinode-119391-m03_multinode-119391-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-119391 node stop m03                                                          | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:18 UTC |
	| node    | multinode-119391 node start                                                             | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:18 UTC | 18 Mar 24 21:19 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-119391                                                                | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:19 UTC |                     |
	| stop    | -p multinode-119391                                                                     | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:19 UTC |                     |
	| start   | -p multinode-119391                                                                     | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:21 UTC | 18 Mar 24 21:24 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-119391                                                                | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:24 UTC |                     |
	| node    | multinode-119391 node delete                                                            | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:24 UTC | 18 Mar 24 21:24 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-119391 stop                                                                   | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:24 UTC |                     |
	| start   | -p multinode-119391                                                                     | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:26 UTC | 18 Mar 24 21:29 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-119391                                                                | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:29 UTC |                     |
	| start   | -p multinode-119391-m02                                                                 | multinode-119391-m02 | jenkins | v1.32.0 | 18 Mar 24 21:29 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-119391-m03                                                                 | multinode-119391-m03 | jenkins | v1.32.0 | 18 Mar 24 21:29 UTC | 18 Mar 24 21:30 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-119391                                                                 | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:30 UTC |                     |
	| delete  | -p multinode-119391-m03                                                                 | multinode-119391-m03 | jenkins | v1.32.0 | 18 Mar 24 21:30 UTC | 18 Mar 24 21:30 UTC |
	| delete  | -p multinode-119391                                                                     | multinode-119391     | jenkins | v1.32.0 | 18 Mar 24 21:30 UTC | 18 Mar 24 21:30 UTC |
	| start   | -p test-preload-524155                                                                  | test-preload-524155  | jenkins | v1.32.0 | 18 Mar 24 21:30 UTC | 18 Mar 24 21:33 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-524155 image pull                                                          | test-preload-524155  | jenkins | v1.32.0 | 18 Mar 24 21:33 UTC | 18 Mar 24 21:33 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-524155                                                                  | test-preload-524155  | jenkins | v1.32.0 | 18 Mar 24 21:33 UTC | 18 Mar 24 21:33 UTC |
	| start   | -p test-preload-524155                                                                  | test-preload-524155  | jenkins | v1.32.0 | 18 Mar 24 21:33 UTC | 18 Mar 24 21:34 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-524155 image list                                                          | test-preload-524155  | jenkins | v1.32.0 | 18 Mar 24 21:34 UTC | 18 Mar 24 21:34 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:33:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:33:39.766902   41726 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:33:39.767121   41726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:33:39.767128   41726 out.go:304] Setting ErrFile to fd 2...
	I0318 21:33:39.767133   41726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:33:39.767276   41726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:33:39.767749   41726 out.go:298] Setting JSON to false
	I0318 21:33:39.768554   41726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4564,"bootTime":1710793056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:33:39.768605   41726 start.go:139] virtualization: kvm guest
	I0318 21:33:39.770802   41726 out.go:177] * [test-preload-524155] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:33:39.772169   41726 notify.go:220] Checking for updates...
	I0318 21:33:39.772179   41726 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:33:39.773537   41726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:33:39.774798   41726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:33:39.776046   41726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:33:39.777249   41726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:33:39.778573   41726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:33:39.780149   41726 config.go:182] Loaded profile config "test-preload-524155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0318 21:33:39.780521   41726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:33:39.780554   41726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:33:39.794511   41726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38969
	I0318 21:33:39.794926   41726 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:33:39.795409   41726 main.go:141] libmachine: Using API Version  1
	I0318 21:33:39.795429   41726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:33:39.795777   41726 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:33:39.795916   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:33:39.797689   41726 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 21:33:39.798822   41726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:33:39.799111   41726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:33:39.799150   41726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:33:39.813286   41726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36755
	I0318 21:33:39.813679   41726 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:33:39.814145   41726 main.go:141] libmachine: Using API Version  1
	I0318 21:33:39.814165   41726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:33:39.814451   41726 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:33:39.814646   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:33:39.847064   41726 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:33:39.848284   41726 start.go:297] selected driver: kvm2
	I0318 21:33:39.848297   41726 start.go:901] validating driver "kvm2" against &{Name:test-preload-524155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-524155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:33:39.848413   41726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:33:39.849357   41726 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:33:39.849442   41726 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:33:39.863335   41726 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:33:39.863707   41726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:33:39.863780   41726 cni.go:84] Creating CNI manager for ""
	I0318 21:33:39.863799   41726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:33:39.863863   41726 start.go:340] cluster config:
	{Name:test-preload-524155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-524155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:33:39.864000   41726 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:33:39.865697   41726 out.go:177] * Starting "test-preload-524155" primary control-plane node in "test-preload-524155" cluster
	I0318 21:33:39.866825   41726 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0318 21:33:40.423285   41726 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0318 21:33:40.423351   41726 cache.go:56] Caching tarball of preloaded images
	I0318 21:33:40.423580   41726 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0318 21:33:40.425633   41726 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0318 21:33:40.426948   41726 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 21:33:40.534612   41726 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0318 21:33:52.939401   41726 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 21:33:52.939479   41726 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 21:33:53.771997   41726 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0318 21:33:53.772116   41726 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/config.json ...
	I0318 21:33:53.772321   41726 start.go:360] acquireMachinesLock for test-preload-524155: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:33:53.772377   41726 start.go:364] duration metric: took 38.199µs to acquireMachinesLock for "test-preload-524155"
	I0318 21:33:53.772391   41726 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:33:53.772396   41726 fix.go:54] fixHost starting: 
	I0318 21:33:53.772684   41726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:33:53.772724   41726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:33:53.786576   41726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0318 21:33:53.786990   41726 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:33:53.787389   41726 main.go:141] libmachine: Using API Version  1
	I0318 21:33:53.787408   41726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:33:53.787715   41726 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:33:53.787867   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:33:53.788010   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetState
	I0318 21:33:53.789472   41726 fix.go:112] recreateIfNeeded on test-preload-524155: state=Stopped err=<nil>
	I0318 21:33:53.789496   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	W0318 21:33:53.789638   41726 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:33:53.791706   41726 out.go:177] * Restarting existing kvm2 VM for "test-preload-524155" ...
	I0318 21:33:53.793073   41726 main.go:141] libmachine: (test-preload-524155) Calling .Start
	I0318 21:33:53.793197   41726 main.go:141] libmachine: (test-preload-524155) Ensuring networks are active...
	I0318 21:33:53.793919   41726 main.go:141] libmachine: (test-preload-524155) Ensuring network default is active
	I0318 21:33:53.794183   41726 main.go:141] libmachine: (test-preload-524155) Ensuring network mk-test-preload-524155 is active
	I0318 21:33:53.794525   41726 main.go:141] libmachine: (test-preload-524155) Getting domain xml...
	I0318 21:33:53.795195   41726 main.go:141] libmachine: (test-preload-524155) Creating domain...
	I0318 21:33:54.943259   41726 main.go:141] libmachine: (test-preload-524155) Waiting to get IP...
	I0318 21:33:54.944071   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:54.944397   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:54.944444   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:54.944369   41795 retry.go:31] will retry after 292.729708ms: waiting for machine to come up
	I0318 21:33:55.238899   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:55.239284   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:55.239312   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:55.239234   41795 retry.go:31] will retry after 269.009822ms: waiting for machine to come up
	I0318 21:33:55.509886   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:55.510261   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:55.510283   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:55.510212   41795 retry.go:31] will retry after 407.039827ms: waiting for machine to come up
	I0318 21:33:55.918488   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:55.918905   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:55.918932   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:55.918863   41795 retry.go:31] will retry after 374.432298ms: waiting for machine to come up
	I0318 21:33:56.295105   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:56.295445   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:56.295476   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:56.295393   41795 retry.go:31] will retry after 587.687159ms: waiting for machine to come up
	I0318 21:33:56.885225   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:56.885662   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:56.885695   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:56.885624   41795 retry.go:31] will retry after 942.48703ms: waiting for machine to come up
	I0318 21:33:57.829468   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:57.829836   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:57.829868   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:57.829774   41795 retry.go:31] will retry after 990.684732ms: waiting for machine to come up
	I0318 21:33:58.822440   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:58.822829   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:58.822850   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:58.822798   41795 retry.go:31] will retry after 932.074056ms: waiting for machine to come up
	I0318 21:33:59.756777   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:33:59.757222   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:33:59.757251   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:33:59.757171   41795 retry.go:31] will retry after 1.730941769s: waiting for machine to come up
	I0318 21:34:01.489872   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:01.490265   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:34:01.490299   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:34:01.490258   41795 retry.go:31] will retry after 1.660698136s: waiting for machine to come up
	I0318 21:34:03.151984   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:03.152389   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:34:03.152411   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:34:03.152333   41795 retry.go:31] will retry after 2.890707081s: waiting for machine to come up
	I0318 21:34:06.045684   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:06.046075   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:34:06.046095   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:34:06.046032   41795 retry.go:31] will retry after 2.535654827s: waiting for machine to come up
	I0318 21:34:08.584641   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:08.585078   41726 main.go:141] libmachine: (test-preload-524155) DBG | unable to find current IP address of domain test-preload-524155 in network mk-test-preload-524155
	I0318 21:34:08.585102   41726 main.go:141] libmachine: (test-preload-524155) DBG | I0318 21:34:08.585036   41795 retry.go:31] will retry after 3.681985214s: waiting for machine to come up
	I0318 21:34:12.270941   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.271288   41726 main.go:141] libmachine: (test-preload-524155) Found IP for machine: 192.168.39.10
	I0318 21:34:12.271313   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has current primary IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.271321   41726 main.go:141] libmachine: (test-preload-524155) Reserving static IP address...
	I0318 21:34:12.271726   41726 main.go:141] libmachine: (test-preload-524155) Reserved static IP address: 192.168.39.10
	I0318 21:34:12.271752   41726 main.go:141] libmachine: (test-preload-524155) Waiting for SSH to be available...
	I0318 21:34:12.271772   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "test-preload-524155", mac: "52:54:00:4e:dc:08", ip: "192.168.39.10"} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.271799   41726 main.go:141] libmachine: (test-preload-524155) DBG | skip adding static IP to network mk-test-preload-524155 - found existing host DHCP lease matching {name: "test-preload-524155", mac: "52:54:00:4e:dc:08", ip: "192.168.39.10"}
	I0318 21:34:12.271811   41726 main.go:141] libmachine: (test-preload-524155) DBG | Getting to WaitForSSH function...
	I0318 21:34:12.273740   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.274060   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.274089   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.274200   41726 main.go:141] libmachine: (test-preload-524155) DBG | Using SSH client type: external
	I0318 21:34:12.274228   41726 main.go:141] libmachine: (test-preload-524155) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/test-preload-524155/id_rsa (-rw-------)
	I0318 21:34:12.274253   41726 main.go:141] libmachine: (test-preload-524155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/test-preload-524155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:34:12.274273   41726 main.go:141] libmachine: (test-preload-524155) DBG | About to run SSH command:
	I0318 21:34:12.274289   41726 main.go:141] libmachine: (test-preload-524155) DBG | exit 0
	I0318 21:34:12.396820   41726 main.go:141] libmachine: (test-preload-524155) DBG | SSH cmd err, output: <nil>: 
	I0318 21:34:12.397135   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetConfigRaw
	I0318 21:34:12.397780   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetIP
	I0318 21:34:12.400225   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.400538   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.400571   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.400810   41726 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/config.json ...
	I0318 21:34:12.401028   41726 machine.go:94] provisionDockerMachine start ...
	I0318 21:34:12.401048   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:34:12.401231   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:12.403704   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.404059   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.404088   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.404219   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:12.404372   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:12.404547   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:12.404690   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:12.404842   41726 main.go:141] libmachine: Using SSH client type: native
	I0318 21:34:12.405030   41726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0318 21:34:12.405043   41726 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:34:12.513581   41726 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:34:12.513612   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetMachineName
	I0318 21:34:12.513853   41726 buildroot.go:166] provisioning hostname "test-preload-524155"
	I0318 21:34:12.513878   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetMachineName
	I0318 21:34:12.514087   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:12.516475   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.516747   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.516773   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.516985   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:12.517160   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:12.517357   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:12.517526   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:12.517672   41726 main.go:141] libmachine: Using SSH client type: native
	I0318 21:34:12.517874   41726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0318 21:34:12.517891   41726 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-524155 && echo "test-preload-524155" | sudo tee /etc/hostname
	I0318 21:34:12.640769   41726 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-524155
	
	I0318 21:34:12.640798   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:12.643281   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.643618   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.643647   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.643777   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:12.643955   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:12.644111   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:12.644235   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:12.644392   41726 main.go:141] libmachine: Using SSH client type: native
	I0318 21:34:12.644587   41726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0318 21:34:12.644610   41726 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-524155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-524155/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-524155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:34:12.758972   41726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:34:12.759013   41726 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:34:12.759034   41726 buildroot.go:174] setting up certificates
	I0318 21:34:12.759043   41726 provision.go:84] configureAuth start
	I0318 21:34:12.759051   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetMachineName
	I0318 21:34:12.759310   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetIP
	I0318 21:34:12.761801   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.762085   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.762110   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.762246   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:12.764168   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.764411   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.764438   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.764547   41726 provision.go:143] copyHostCerts
	I0318 21:34:12.764600   41726 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:34:12.764610   41726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:34:12.764675   41726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:34:12.764789   41726 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:34:12.764798   41726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:34:12.764822   41726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:34:12.764969   41726 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:34:12.764986   41726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:34:12.765042   41726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:34:12.765122   41726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.test-preload-524155 san=[127.0.0.1 192.168.39.10 localhost minikube test-preload-524155]
	I0318 21:34:12.959363   41726 provision.go:177] copyRemoteCerts
	I0318 21:34:12.959417   41726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:34:12.959440   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:12.961942   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.962291   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:12.962324   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:12.962521   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:12.962708   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:12.962910   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:12.963061   41726 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/test-preload-524155/id_rsa Username:docker}
	I0318 21:34:13.046936   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:34:13.075103   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 21:34:13.100776   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:34:13.126101   41726 provision.go:87] duration metric: took 367.047514ms to configureAuth
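The configureAuth step above generates a TLS server certificate for the machine, signed by the minikube CA and carrying the SANs listed in the provision.go line (127.0.0.1, 192.168.39.10, localhost, minikube, test-preload-524155). Below is a minimal Go sketch of that kind of issuance; it is not minikube's provision.go, and for brevity it creates the CA in memory instead of loading ca.pem / ca-key.pem from the certs directory (error handling omitted).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads an existing ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs shown in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-524155"}},
		DNSNames:     []string{"localhost", "minikube", "test-preload-524155"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.10")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}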
	I0318 21:34:13.126127   41726 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:34:13.126273   41726 config.go:182] Loaded profile config "test-preload-524155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0318 21:34:13.126347   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:13.128714   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.129023   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:13.129050   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.129265   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:13.129428   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:13.129603   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:13.129704   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:13.129856   41726 main.go:141] libmachine: Using SSH client type: native
	I0318 21:34:13.130026   41726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0318 21:34:13.130047   41726 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:34:13.411772   41726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:34:13.411798   41726 machine.go:97] duration metric: took 1.010755625s to provisionDockerMachine
	I0318 21:34:13.411811   41726 start.go:293] postStartSetup for "test-preload-524155" (driver="kvm2")
	I0318 21:34:13.411825   41726 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:34:13.411851   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:34:13.412154   41726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:34:13.412181   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:13.414654   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.414954   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:13.414983   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.415100   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:13.415275   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:13.415424   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:13.415560   41726 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/test-preload-524155/id_rsa Username:docker}
	I0318 21:34:13.499817   41726 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:34:13.504355   41726 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:34:13.504375   41726 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:34:13.504438   41726 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:34:13.504524   41726 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:34:13.504635   41726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:34:13.514305   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:34:13.541784   41726 start.go:296] duration metric: took 129.961473ms for postStartSetup
	I0318 21:34:13.541813   41726 fix.go:56] duration metric: took 19.769417077s for fixHost
	I0318 21:34:13.541831   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:13.544300   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.544657   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:13.544684   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.544825   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:13.545032   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:13.545184   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:13.545327   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:13.545466   41726 main.go:141] libmachine: Using SSH client type: native
	I0318 21:34:13.545662   41726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0318 21:34:13.545675   41726 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:34:13.653894   41726 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710797653.624013558
	
	I0318 21:34:13.653916   41726 fix.go:216] guest clock: 1710797653.624013558
	I0318 21:34:13.653925   41726 fix.go:229] Guest: 2024-03-18 21:34:13.624013558 +0000 UTC Remote: 2024-03-18 21:34:13.541816583 +0000 UTC m=+33.818533846 (delta=82.196975ms)
	I0318 21:34:13.653942   41726 fix.go:200] guest clock delta is within tolerance: 82.196975ms
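fix.go reads the guest clock over SSH (the date command above) and compares it against the host's view of the remote time, accepting the machine when the drift is small; here the delta is 82.196975ms. A small Go sketch of such a tolerance check, using the two timestamps from the log (the 2-second tolerance is illustrative, not minikube's actual constant):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute difference between the guest
// clock and the host-observed remote time is under the allowed drift.
func withinTolerance(guest, remote time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	remote := time.Date(2024, time.March, 18, 21, 34, 13, 541816583, time.UTC)
	guest := time.Date(2024, time.March, 18, 21, 34, 13, 624013558, time.UTC)
	// Prints true: the delta is ~82.197ms, well inside the example tolerance.
	fmt.Println(withinTolerance(guest, remote, 2*time.Second))
}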
	I0318 21:34:13.653954   41726 start.go:83] releasing machines lock for "test-preload-524155", held for 19.881560221s
	I0318 21:34:13.653972   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:34:13.654219   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetIP
	I0318 21:34:13.656709   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.657052   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:13.657079   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.657237   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:34:13.657671   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:34:13.657818   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:34:13.657892   41726 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:34:13.657944   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:13.657978   41726 ssh_runner.go:195] Run: cat /version.json
	I0318 21:34:13.658002   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:13.660694   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.660752   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.661068   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:13.661097   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.661125   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:13.661146   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:13.661207   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:13.661370   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:13.661447   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:13.661582   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:13.661653   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:13.661748   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:13.661802   41726 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/test-preload-524155/id_rsa Username:docker}
	I0318 21:34:13.661853   41726 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/test-preload-524155/id_rsa Username:docker}
	I0318 21:34:13.737650   41726 ssh_runner.go:195] Run: systemctl --version
	I0318 21:34:13.764210   41726 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:34:13.908878   41726 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:34:13.915904   41726 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:34:13.915957   41726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:34:13.932847   41726 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:34:13.932865   41726 start.go:494] detecting cgroup driver to use...
	I0318 21:34:13.932936   41726 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:34:13.948931   41726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:34:13.963266   41726 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:34:13.963309   41726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:34:13.977029   41726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:34:13.990977   41726 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:34:14.110815   41726 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:34:14.279056   41726 docker.go:233] disabling docker service ...
	I0318 21:34:14.279139   41726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:34:14.294073   41726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:34:14.307668   41726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:34:14.434162   41726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:34:14.564710   41726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:34:14.579625   41726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:34:14.599441   41726 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0318 21:34:14.599494   41726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:34:14.610311   41726 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:34:14.610360   41726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:34:14.621304   41726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:34:14.632135   41726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:34:14.642844   41726 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:34:14.653754   41726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:34:14.664346   41726 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:34:14.683460   41726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:34:14.696155   41726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:34:14.707722   41726 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:34:14.707763   41726 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:34:14.722244   41726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:34:14.732723   41726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:34:14.858221   41726 ssh_runner.go:195] Run: sudo systemctl restart crio
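The sed commands above edit the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. Based only on those commands, the affected keys would plausibly end up reading roughly as follows (illustrative reconstruction; the surrounding sections of the real file are omitted):

pause_image = "registry.k8s.io/pause:3.7"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]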
	I0318 21:34:14.997608   41726 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:34:14.997679   41726 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:34:15.002870   41726 start.go:562] Will wait 60s for crictl version
	I0318 21:34:15.002923   41726 ssh_runner.go:195] Run: which crictl
	I0318 21:34:15.007127   41726 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:34:15.042455   41726 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:34:15.042548   41726 ssh_runner.go:195] Run: crio --version
	I0318 21:34:15.075154   41726 ssh_runner.go:195] Run: crio --version
	I0318 21:34:15.105794   41726 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0318 21:34:15.107206   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetIP
	I0318 21:34:15.109653   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:15.109985   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:15.110020   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:15.110183   41726 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:34:15.114339   41726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:34:15.128546   41726 kubeadm.go:877] updating cluster {Name:test-preload-524155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-524155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:34:15.128670   41726 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0318 21:34:15.128734   41726 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:34:15.169736   41726 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0318 21:34:15.169797   41726 ssh_runner.go:195] Run: which lz4
	I0318 21:34:15.174056   41726 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:34:15.178502   41726 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:34:15.178522   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0318 21:34:16.978053   41726 crio.go:462] duration metric: took 1.804029492s to copy over tarball
	I0318 21:34:16.978115   41726 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:34:19.633697   41726 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.655554444s)
	I0318 21:34:19.633728   41726 crio.go:469] duration metric: took 2.655646441s to extract the tarball
	I0318 21:34:19.633737   41726 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:34:19.676105   41726 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:34:19.721967   41726 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0318 21:34:19.721986   41726 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:34:19.722034   41726 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:34:19.722067   41726 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 21:34:19.722083   41726 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 21:34:19.722115   41726 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 21:34:19.722121   41726 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 21:34:19.722067   41726 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 21:34:19.722162   41726 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 21:34:19.722099   41726 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 21:34:19.723582   41726 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 21:34:19.723593   41726 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 21:34:19.723602   41726 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 21:34:19.723620   41726 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 21:34:19.723635   41726 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 21:34:19.723646   41726 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 21:34:19.723641   41726 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 21:34:19.723647   41726 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:34:19.874363   41726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 21:34:19.875702   41726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 21:34:19.876509   41726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0318 21:34:19.877082   41726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 21:34:19.878456   41726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0318 21:34:19.881693   41726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 21:34:19.949416   41726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0318 21:34:20.052862   41726 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0318 21:34:20.052925   41726 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0318 21:34:20.052967   41726 ssh_runner.go:195] Run: which crictl
	I0318 21:34:20.059277   41726 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0318 21:34:20.059295   41726 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0318 21:34:20.059316   41726 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 21:34:20.059322   41726 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 21:34:20.059325   41726 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0318 21:34:20.059350   41726 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 21:34:20.059424   41726 ssh_runner.go:195] Run: which crictl
	I0318 21:34:20.059426   41726 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0318 21:34:20.059510   41726 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 21:34:20.059537   41726 ssh_runner.go:195] Run: which crictl
	I0318 21:34:20.059363   41726 ssh_runner.go:195] Run: which crictl
	I0318 21:34:20.059364   41726 ssh_runner.go:195] Run: which crictl
	I0318 21:34:20.059486   41726 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0318 21:34:20.059647   41726 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 21:34:20.059672   41726 ssh_runner.go:195] Run: which crictl
	I0318 21:34:20.095786   41726 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0318 21:34:20.095826   41726 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 21:34:20.095873   41726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0318 21:34:20.095921   41726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 21:34:20.095876   41726 ssh_runner.go:195] Run: which crictl
	I0318 21:34:20.095925   41726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0318 21:34:20.095980   41726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0318 21:34:20.096031   41726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0318 21:34:20.096085   41726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 21:34:20.249805   41726 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0318 21:34:20.249868   41726 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0318 21:34:20.249926   41726 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0318 21:34:20.249943   41726 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 21:34:20.249963   41726 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0318 21:34:20.250012   41726 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0318 21:34:20.250048   41726 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0318 21:34:20.250078   41726 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0318 21:34:20.250121   41726 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0318 21:34:20.250139   41726 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0318 21:34:20.250125   41726 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0318 21:34:20.250188   41726 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0318 21:34:20.250248   41726 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0318 21:34:20.260311   41726 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0318 21:34:20.260331   41726 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0318 21:34:20.260368   41726 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0318 21:34:20.265575   41726 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0318 21:34:20.268454   41726 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0318 21:34:20.268616   41726 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0318 21:34:20.268947   41726 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0318 21:34:20.316311   41726 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0318 21:34:20.316404   41726 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0318 21:34:20.316504   41726 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0318 21:34:20.677040   41726 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:34:22.743817   41726 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.483422222s)
	I0318 21:34:22.743855   41726 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0318 21:34:22.743870   41726 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.427337552s)
	I0318 21:34:22.743880   41726 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0318 21:34:22.743899   41726 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0318 21:34:22.743954   41726 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.066858243s)
	I0318 21:34:22.743961   41726 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0318 21:34:24.896358   41726 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.152370668s)
	I0318 21:34:24.896380   41726 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0318 21:34:24.896404   41726 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 21:34:24.896442   41726 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0318 21:34:25.358760   41726 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 21:34:25.358832   41726 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 21:34:25.358887   41726 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0318 21:34:25.505793   41726 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0318 21:34:25.505828   41726 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0318 21:34:25.505890   41726 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0318 21:34:25.961888   41726 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0318 21:34:25.961929   41726 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0318 21:34:25.961978   41726 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0318 21:34:26.716810   41726 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0318 21:34:26.716859   41726 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0318 21:34:26.716944   41726 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0318 21:34:27.561487   41726 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0318 21:34:27.561535   41726 cache_images.go:123] Successfully loaded all cached images
	I0318 21:34:27.561542   41726 cache_images.go:92] duration metric: took 7.839545787s to LoadCachedImages
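For each image above, the cache loader inspects the runtime, decides the image "needs transfer" when the expected ID is absent, removes any stale tag with crictl, and podman-loads the tarball previously copied into /var/lib/minikube/images. A condensed, hypothetical Go sketch of that per-image flow follows; runCmd and loadCachedImage are invented names for illustration, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd executes a command and echoes its combined output, mirroring the
// ssh_runner lines in the log (here it runs locally instead of over SSH).
func runCmd(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s %v -> %s\n", name, args, out)
	return err
}

func loadCachedImage(image, tarball string) error {
	// "needs transfer": inspect fails when the image (or expected ID) is absent.
	if err := runCmd("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image); err == nil {
		return nil
	}
	// Drop whatever stale tag crictl may still know about, then load from cache.
	_ = runCmd("sudo", "/usr/bin/crictl", "rmi", image)
	return runCmd("sudo", "podman", "load", "-i", tarball)
}

func main() {
	_ = loadCachedImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7")
}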
	I0318 21:34:27.561571   41726 kubeadm.go:928] updating node { 192.168.39.10 8443 v1.24.4 crio true true} ...
	I0318 21:34:27.561704   41726 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-524155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-524155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:34:27.561765   41726 ssh_runner.go:195] Run: crio config
	I0318 21:34:27.609472   41726 cni.go:84] Creating CNI manager for ""
	I0318 21:34:27.609493   41726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:34:27.609504   41726 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:34:27.609526   41726 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-524155 NodeName:test-preload-524155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:34:27.609676   41726 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-524155"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:34:27.609742   41726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0318 21:34:27.620889   41726 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:34:27.620957   41726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:34:27.631474   41726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0318 21:34:27.649303   41726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:34:27.666771   41726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0318 21:34:27.685022   41726 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0318 21:34:27.689076   41726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:34:27.702530   41726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:34:27.839595   41726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:34:27.858926   41726 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155 for IP: 192.168.39.10
	I0318 21:34:27.858944   41726 certs.go:194] generating shared ca certs ...
	I0318 21:34:27.858963   41726 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:34:27.859154   41726 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:34:27.859206   41726 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:34:27.859216   41726 certs.go:256] generating profile certs ...
	I0318 21:34:27.859299   41726 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/client.key
	I0318 21:34:27.859387   41726 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/apiserver.key.668a3044
	I0318 21:34:27.859436   41726 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/proxy-client.key
	I0318 21:34:27.859548   41726 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:34:27.859586   41726 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:34:27.859601   41726 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:34:27.859629   41726 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:34:27.859709   41726 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:34:27.859734   41726 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:34:27.859776   41726 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:34:27.860499   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:34:27.898876   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:34:27.932290   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:34:27.971033   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:34:28.008291   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 21:34:28.044229   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:34:28.078235   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:34:28.103158   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:34:28.127802   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:34:28.152333   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:34:28.176667   41726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:34:28.200961   41726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:34:28.218965   41726 ssh_runner.go:195] Run: openssl version
	I0318 21:34:28.224836   41726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:34:28.236733   41726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:34:28.241529   41726 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:34:28.241595   41726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:34:28.247462   41726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:34:28.259517   41726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:34:28.271463   41726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:34:28.276291   41726 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:34:28.276341   41726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:34:28.282284   41726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:34:28.294452   41726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:34:28.306531   41726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:34:28.311259   41726 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:34:28.311307   41726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:34:28.317297   41726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:34:28.329538   41726 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:34:28.334372   41726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:34:28.340515   41726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:34:28.346604   41726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:34:28.352872   41726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:34:28.358917   41726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:34:28.364916   41726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
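Each "openssl x509 -noout -checkend 86400" call above verifies that the certificate will not expire within the next 86400 seconds (24 hours); openssl exits non-zero if it will. A rough Go equivalent of that check, using one of the cert paths from the log as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath expires inside the
// given window, i.e. the condition that makes "-checkend" fail.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}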
	I0318 21:34:28.370873   41726 kubeadm.go:391] StartCluster: {Name:test-preload-524155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-524155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:34:28.370951   41726 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:34:28.370989   41726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:34:28.415221   41726 cri.go:89] found id: ""
	I0318 21:34:28.415295   41726 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:34:28.427418   41726 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:34:28.427442   41726 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:34:28.427448   41726 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:34:28.427502   41726 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:34:28.438468   41726 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:34:28.438938   41726 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-524155" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:34:28.439067   41726 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-524155" cluster setting kubeconfig missing "test-preload-524155" context setting]
	I0318 21:34:28.439324   41726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:34:28.439945   41726 kapi.go:59] client config for test-preload-524155: &rest.Config{Host:"https://192.168.39.10:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/client.crt", KeyFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/client.key", CAFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 21:34:28.440485   41726 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:34:28.451120   41726 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.10
	I0318 21:34:28.451162   41726 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:34:28.451177   41726 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:34:28.451219   41726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:34:28.490551   41726 cri.go:89] found id: ""
	I0318 21:34:28.490597   41726 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:34:28.508886   41726 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:34:28.519929   41726 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:34:28.519944   41726 kubeadm.go:156] found existing configuration files:
	
	I0318 21:34:28.519975   41726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:34:28.530056   41726 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:34:28.530121   41726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:34:28.540586   41726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:34:28.550612   41726 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:34:28.550665   41726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:34:28.560895   41726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:34:28.570834   41726 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:34:28.570890   41726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:34:28.581096   41726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:34:28.590991   41726 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:34:28.591050   41726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:34:28.601268   41726 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:34:28.611834   41726 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:34:28.715882   41726 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:34:29.554695   41726 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:34:29.844368   41726 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:34:29.902598   41726 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
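
restartPrimaryControlPlane rebuilds the control plane piecewise with the five kubeadm init phases above, all driven by the same /var/tmp/minikube/kubeadm.yaml. A rough local sketch of the same sequence with os/exec; minikube actually runs these over its ssh_runner inside the guest, so this is only illustrative:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Phases in the order they appear in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v failed: %v", p, err)
		}
	}
}
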
	I0318 21:34:29.973425   41726 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:34:29.973502   41726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:34:30.474372   41726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:34:30.974408   41726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:34:31.473950   41726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:34:31.492203   41726 api_server.go:72] duration metric: took 1.518779908s to wait for apiserver process to appear ...
	I0318 21:34:31.492224   41726 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:34:31.492240   41726 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0318 21:34:31.492708   41726 api_server.go:269] stopped: https://192.168.39.10:8443/healthz: Get "https://192.168.39.10:8443/healthz": dial tcp 192.168.39.10:8443: connect: connection refused
	I0318 21:34:31.992418   41726 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0318 21:34:34.655179   41726 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:34:34.655202   41726 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:34:34.655215   41726 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0318 21:34:34.663382   41726 api_server.go:279] https://192.168.39.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:34:34.663413   41726 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:34:34.992772   41726 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0318 21:34:35.003397   41726 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0318 21:34:35.003436   41726 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0318 21:34:35.492983   41726 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0318 21:34:35.501497   41726 api_server.go:279] https://192.168.39.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0318 21:34:35.501527   41726 api_server.go:103] status: https://192.168.39.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0318 21:34:35.993137   41726 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0318 21:34:36.008783   41726 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0318 21:34:36.016070   41726 api_server.go:141] control plane version: v1.24.4
	I0318 21:34:36.016092   41726 api_server.go:131] duration metric: took 4.523863369s to wait for apiserver health ...
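
The healthz wait above goes through the usual restart progression: connection refused while the apiserver binds, 403 because the anonymous probe is rejected until the RBAC bootstrap post-start hook finishes, 500 while the remaining hooks complete, then 200. A minimal Go sketch of the same polling loop, assuming the address from the log and skipping certificate verification for brevity (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.10:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not up yet:", err) // e.g. connection refused right after the restart
		} else {
			fmt.Println("healthz returned", resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return // 403 and 500 are treated as "still starting"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
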
	I0318 21:34:36.016101   41726 cni.go:84] Creating CNI manager for ""
	I0318 21:34:36.016108   41726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:34:36.017743   41726 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:34:36.019177   41726 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:34:36.036433   41726 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
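
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube chooses for the kvm2 driver with the crio runtime. Its exact contents are not printed in the log, so the conflist below is only a representative bridge-plus-portmap example with an assumed pod subnet, not the file that was actually written:

package main

import "os"

// Illustrative bridge CNI conflist; the field values are assumptions, not the logged file.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Writing to /etc/cni/net.d requires root inside the guest.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
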
	I0318 21:34:36.066141   41726 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:34:36.074843   41726 system_pods.go:59] 8 kube-system pods found
	I0318 21:34:36.074880   41726 system_pods.go:61] "coredns-6d4b75cb6d-f7qct" [0faf63cd-2f69-4252-b153-4eb1e7070ff4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:34:36.074888   41726 system_pods.go:61] "coredns-6d4b75cb6d-k6455" [9a3179ac-8035-417d-9eda-c62cfa856a51] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:34:36.074898   41726 system_pods.go:61] "etcd-test-preload-524155" [7bed48d6-cb17-4ad4-8919-3c29fe72c342] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:34:36.074911   41726 system_pods.go:61] "kube-apiserver-test-preload-524155" [9a8008ed-0819-4874-9f8c-d8526bc1d7dc] Running
	I0318 21:34:36.074919   41726 system_pods.go:61] "kube-controller-manager-test-preload-524155" [168c23db-faef-4116-b37f-c68c11eafd29] Running
	I0318 21:34:36.074926   41726 system_pods.go:61] "kube-proxy-w9f2x" [7a980559-403e-435c-8d08-2e0c92cdd4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:34:36.074935   41726 system_pods.go:61] "kube-scheduler-test-preload-524155" [7ebea108-d249-4322-86ae-6860dc72e175] Running
	I0318 21:34:36.074941   41726 system_pods.go:61] "storage-provisioner" [11c8970f-b4de-4ada-b8bc-7c446868d4db] Running
	I0318 21:34:36.074949   41726 system_pods.go:74] duration metric: took 8.791631ms to wait for pod list to return data ...
	I0318 21:34:36.074956   41726 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:34:36.079501   41726 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:34:36.079531   41726 node_conditions.go:123] node cpu capacity is 2
	I0318 21:34:36.079542   41726 node_conditions.go:105] duration metric: took 4.579283ms to run NodePressure ...
	I0318 21:34:36.079564   41726 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:34:36.342231   41726 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:34:36.348070   41726 kubeadm.go:733] kubelet initialised
	I0318 21:34:36.348089   41726 kubeadm.go:734] duration metric: took 5.837904ms waiting for restarted kubelet to initialise ...
	I0318 21:34:36.348095   41726 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:34:36.352749   41726 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-f7qct" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:36.357123   41726 pod_ready.go:97] node "test-preload-524155" hosting pod "coredns-6d4b75cb6d-f7qct" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.357148   41726 pod_ready.go:81] duration metric: took 4.377142ms for pod "coredns-6d4b75cb6d-f7qct" in "kube-system" namespace to be "Ready" ...
	E0318 21:34:36.357160   41726 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-524155" hosting pod "coredns-6d4b75cb6d-f7qct" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.357169   41726 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-k6455" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:36.361463   41726 pod_ready.go:97] node "test-preload-524155" hosting pod "coredns-6d4b75cb6d-k6455" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.361483   41726 pod_ready.go:81] duration metric: took 4.296831ms for pod "coredns-6d4b75cb6d-k6455" in "kube-system" namespace to be "Ready" ...
	E0318 21:34:36.361493   41726 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-524155" hosting pod "coredns-6d4b75cb6d-k6455" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.361500   41726 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:36.365846   41726 pod_ready.go:97] node "test-preload-524155" hosting pod "etcd-test-preload-524155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.365870   41726 pod_ready.go:81] duration metric: took 4.353179ms for pod "etcd-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	E0318 21:34:36.365880   41726 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-524155" hosting pod "etcd-test-preload-524155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.365888   41726 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:36.469241   41726 pod_ready.go:97] node "test-preload-524155" hosting pod "kube-apiserver-test-preload-524155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.469293   41726 pod_ready.go:81] duration metric: took 103.387382ms for pod "kube-apiserver-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	E0318 21:34:36.469307   41726 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-524155" hosting pod "kube-apiserver-test-preload-524155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.469316   41726 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:36.872762   41726 pod_ready.go:97] node "test-preload-524155" hosting pod "kube-controller-manager-test-preload-524155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.872788   41726 pod_ready.go:81] duration metric: took 403.460093ms for pod "kube-controller-manager-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	E0318 21:34:36.872797   41726 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-524155" hosting pod "kube-controller-manager-test-preload-524155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:36.872803   41726 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9f2x" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:37.271190   41726 pod_ready.go:97] node "test-preload-524155" hosting pod "kube-proxy-w9f2x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:37.271219   41726 pod_ready.go:81] duration metric: took 398.407193ms for pod "kube-proxy-w9f2x" in "kube-system" namespace to be "Ready" ...
	E0318 21:34:37.271228   41726 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-524155" hosting pod "kube-proxy-w9f2x" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:37.271237   41726 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:37.670447   41726 pod_ready.go:97] node "test-preload-524155" hosting pod "kube-scheduler-test-preload-524155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:37.670475   41726 pod_ready.go:81] duration metric: took 399.231951ms for pod "kube-scheduler-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	E0318 21:34:37.670484   41726 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-524155" hosting pod "kube-scheduler-test-preload-524155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:37.670497   41726 pod_ready.go:38] duration metric: took 1.32238784s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
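
Each pod_ready wait above checks the hosting node first: while test-preload-524155 still reports Ready:"False" the pod wait is skipped with the error shown, and only once the node is Ready does the pod's own Ready condition matter. A rough client-go sketch of that condition check, using the kubeconfig path from the log; this is illustrative, not minikube's pod_ready.go, and the same test applies to the node's Ready condition:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18421-5321/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "coredns-6d4b75cb6d-k6455")
	fmt.Println("ready:", ready, "err:", err)
}
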
	I0318 21:34:37.670514   41726 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 21:34:37.683713   41726 ops.go:34] apiserver oom_adj: -16
	I0318 21:34:37.683730   41726 kubeadm.go:591] duration metric: took 9.256276545s to restartPrimaryControlPlane
	I0318 21:34:37.683737   41726 kubeadm.go:393] duration metric: took 9.312868477s to StartCluster
	I0318 21:34:37.683750   41726 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:34:37.683821   41726 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:34:37.684422   41726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:34:37.684672   41726 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:34:37.686318   41726 out.go:177] * Verifying Kubernetes components...
	I0318 21:34:37.684740   41726 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 21:34:37.684894   41726 config.go:182] Loaded profile config "test-preload-524155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0318 21:34:37.687583   41726 addons.go:69] Setting storage-provisioner=true in profile "test-preload-524155"
	I0318 21:34:37.687624   41726 addons.go:234] Setting addon storage-provisioner=true in "test-preload-524155"
	W0318 21:34:37.687636   41726 addons.go:243] addon storage-provisioner should already be in state true
	I0318 21:34:37.687658   41726 host.go:66] Checking if "test-preload-524155" exists ...
	I0318 21:34:37.687587   41726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:34:37.687587   41726 addons.go:69] Setting default-storageclass=true in profile "test-preload-524155"
	I0318 21:34:37.687778   41726 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-524155"
	I0318 21:34:37.687990   41726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:34:37.688029   41726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:34:37.688157   41726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:34:37.688197   41726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:34:37.701964   41726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36569
	I0318 21:34:37.702221   41726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
	I0318 21:34:37.702361   41726 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:34:37.702620   41726 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:34:37.702812   41726 main.go:141] libmachine: Using API Version  1
	I0318 21:34:37.702831   41726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:34:37.703083   41726 main.go:141] libmachine: Using API Version  1
	I0318 21:34:37.703094   41726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:34:37.703135   41726 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:34:37.703283   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetState
	I0318 21:34:37.703412   41726 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:34:37.703931   41726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:34:37.703963   41726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:34:37.705741   41726 kapi.go:59] client config for test-preload-524155: &rest.Config{Host:"https://192.168.39.10:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/client.crt", KeyFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/test-preload-524155/client.key", CAFile:"/home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 21:34:37.706044   41726 addons.go:234] Setting addon default-storageclass=true in "test-preload-524155"
	W0318 21:34:37.706066   41726 addons.go:243] addon default-storageclass should already be in state true
	I0318 21:34:37.706090   41726 host.go:66] Checking if "test-preload-524155" exists ...
	I0318 21:34:37.706459   41726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:34:37.706518   41726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:34:37.717579   41726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0318 21:34:37.718055   41726 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:34:37.718578   41726 main.go:141] libmachine: Using API Version  1
	I0318 21:34:37.718601   41726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:34:37.718916   41726 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:34:37.719111   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetState
	I0318 21:34:37.719856   41726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0318 21:34:37.720311   41726 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:34:37.720677   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:34:37.720798   41726 main.go:141] libmachine: Using API Version  1
	I0318 21:34:37.720818   41726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:34:37.722514   41726 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:34:37.721152   41726 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:34:37.722932   41726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:34:37.723733   41726 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:34:37.723749   41726 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 21:34:37.723757   41726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:34:37.723767   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:37.726620   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:37.727077   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:37.727239   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:37.727263   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:37.727403   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:37.727524   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:37.727625   41726 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/test-preload-524155/id_rsa Username:docker}
	I0318 21:34:37.738028   41726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0318 21:34:37.738424   41726 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:34:37.738859   41726 main.go:141] libmachine: Using API Version  1
	I0318 21:34:37.738881   41726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:34:37.739171   41726 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:34:37.739358   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetState
	I0318 21:34:37.740705   41726 main.go:141] libmachine: (test-preload-524155) Calling .DriverName
	I0318 21:34:37.740962   41726 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 21:34:37.740982   41726 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 21:34:37.741000   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHHostname
	I0318 21:34:37.743424   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:37.743778   41726 main.go:141] libmachine: (test-preload-524155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:dc:08", ip: ""} in network mk-test-preload-524155: {Iface:virbr1 ExpiryTime:2024-03-18 22:34:05 +0000 UTC Type:0 Mac:52:54:00:4e:dc:08 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:test-preload-524155 Clientid:01:52:54:00:4e:dc:08}
	I0318 21:34:37.743803   41726 main.go:141] libmachine: (test-preload-524155) DBG | domain test-preload-524155 has defined IP address 192.168.39.10 and MAC address 52:54:00:4e:dc:08 in network mk-test-preload-524155
	I0318 21:34:37.743911   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHPort
	I0318 21:34:37.744086   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHKeyPath
	I0318 21:34:37.744242   41726 main.go:141] libmachine: (test-preload-524155) Calling .GetSSHUsername
	I0318 21:34:37.744370   41726 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/test-preload-524155/id_rsa Username:docker}
	I0318 21:34:37.868328   41726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:34:37.888583   41726 node_ready.go:35] waiting up to 6m0s for node "test-preload-524155" to be "Ready" ...
	I0318 21:34:37.957208   41726 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:34:37.963628   41726 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 21:34:38.902516   41726 main.go:141] libmachine: Making call to close driver server
	I0318 21:34:38.902547   41726 main.go:141] libmachine: (test-preload-524155) Calling .Close
	I0318 21:34:38.902555   41726 main.go:141] libmachine: Making call to close driver server
	I0318 21:34:38.902571   41726 main.go:141] libmachine: (test-preload-524155) Calling .Close
	I0318 21:34:38.902827   41726 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:34:38.902835   41726 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:34:38.902847   41726 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:34:38.902849   41726 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:34:38.902855   41726 main.go:141] libmachine: Making call to close driver server
	I0318 21:34:38.902860   41726 main.go:141] libmachine: Making call to close driver server
	I0318 21:34:38.902862   41726 main.go:141] libmachine: (test-preload-524155) Calling .Close
	I0318 21:34:38.902869   41726 main.go:141] libmachine: (test-preload-524155) Calling .Close
	I0318 21:34:38.903044   41726 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:34:38.903059   41726 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:34:38.903161   41726 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:34:38.903167   41726 main.go:141] libmachine: (test-preload-524155) DBG | Closing plugin on server side
	I0318 21:34:38.903175   41726 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:34:38.908620   41726 main.go:141] libmachine: Making call to close driver server
	I0318 21:34:38.908635   41726 main.go:141] libmachine: (test-preload-524155) Calling .Close
	I0318 21:34:38.908825   41726 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:34:38.908835   41726 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:34:38.910703   41726 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 21:34:38.911751   41726 addons.go:505] duration metric: took 1.227008322s for enable addons: enabled=[storage-provisioner default-storageclass]
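
Both addon manifests are applied with the version-matched kubectl under /var/lib/minikube/binaries/v1.24.4 and the in-VM kubeconfig, exactly as the two Run lines above show. A small os/exec sketch of the same call shape, assuming it runs inside the guest where those paths exist:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	}
	for _, m := range manifests {
		// sudo accepts leading NAME=value arguments and exports them to the command,
		// which is the form the log uses for KUBECONFIG.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.24.4/kubectl", "apply", "-f", m)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("apply %s: %v", m, err)
		}
	}
}
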
	I0318 21:34:39.894091   41726 node_ready.go:53] node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:42.392671   41726 node_ready.go:53] node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:44.393363   41726 node_ready.go:53] node "test-preload-524155" has status "Ready":"False"
	I0318 21:34:45.392160   41726 node_ready.go:49] node "test-preload-524155" has status "Ready":"True"
	I0318 21:34:45.392182   41726 node_ready.go:38] duration metric: took 7.503565743s for node "test-preload-524155" to be "Ready" ...
	I0318 21:34:45.392190   41726 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:34:45.398014   41726 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-k6455" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.404023   41726 pod_ready.go:92] pod "coredns-6d4b75cb6d-k6455" in "kube-system" namespace has status "Ready":"True"
	I0318 21:34:45.404039   41726 pod_ready.go:81] duration metric: took 6.002653ms for pod "coredns-6d4b75cb6d-k6455" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.404050   41726 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.409089   41726 pod_ready.go:92] pod "etcd-test-preload-524155" in "kube-system" namespace has status "Ready":"True"
	I0318 21:34:45.409110   41726 pod_ready.go:81] duration metric: took 5.044281ms for pod "etcd-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.409131   41726 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.413432   41726 pod_ready.go:92] pod "kube-apiserver-test-preload-524155" in "kube-system" namespace has status "Ready":"True"
	I0318 21:34:45.413447   41726 pod_ready.go:81] duration metric: took 4.305257ms for pod "kube-apiserver-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.413457   41726 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.418020   41726 pod_ready.go:92] pod "kube-controller-manager-test-preload-524155" in "kube-system" namespace has status "Ready":"True"
	I0318 21:34:45.418031   41726 pod_ready.go:81] duration metric: took 4.564274ms for pod "kube-controller-manager-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.418040   41726 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w9f2x" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.793881   41726 pod_ready.go:92] pod "kube-proxy-w9f2x" in "kube-system" namespace has status "Ready":"True"
	I0318 21:34:45.793901   41726 pod_ready.go:81] duration metric: took 375.850557ms for pod "kube-proxy-w9f2x" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:45.793910   41726 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:47.801298   41726 pod_ready.go:92] pod "kube-scheduler-test-preload-524155" in "kube-system" namespace has status "Ready":"True"
	I0318 21:34:47.801317   41726 pod_ready.go:81] duration metric: took 2.007401316s for pod "kube-scheduler-test-preload-524155" in "kube-system" namespace to be "Ready" ...
	I0318 21:34:47.801327   41726 pod_ready.go:38] duration metric: took 2.40912642s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:34:47.801342   41726 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:34:47.801393   41726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:34:47.821501   41726 api_server.go:72] duration metric: took 10.136799804s to wait for apiserver process to appear ...
	I0318 21:34:47.821519   41726 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:34:47.821540   41726 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0318 21:34:47.826487   41726 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0318 21:34:47.827327   41726 api_server.go:141] control plane version: v1.24.4
	I0318 21:34:47.827346   41726 api_server.go:131] duration metric: took 5.822029ms to wait for apiserver health ...
	I0318 21:34:47.827353   41726 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:34:47.996129   41726 system_pods.go:59] 7 kube-system pods found
	I0318 21:34:47.996163   41726 system_pods.go:61] "coredns-6d4b75cb6d-k6455" [9a3179ac-8035-417d-9eda-c62cfa856a51] Running
	I0318 21:34:47.996170   41726 system_pods.go:61] "etcd-test-preload-524155" [7bed48d6-cb17-4ad4-8919-3c29fe72c342] Running
	I0318 21:34:47.996176   41726 system_pods.go:61] "kube-apiserver-test-preload-524155" [9a8008ed-0819-4874-9f8c-d8526bc1d7dc] Running
	I0318 21:34:47.996181   41726 system_pods.go:61] "kube-controller-manager-test-preload-524155" [168c23db-faef-4116-b37f-c68c11eafd29] Running
	I0318 21:34:47.996186   41726 system_pods.go:61] "kube-proxy-w9f2x" [7a980559-403e-435c-8d08-2e0c92cdd4a8] Running
	I0318 21:34:47.996191   41726 system_pods.go:61] "kube-scheduler-test-preload-524155" [7ebea108-d249-4322-86ae-6860dc72e175] Running
	I0318 21:34:47.996201   41726 system_pods.go:61] "storage-provisioner" [11c8970f-b4de-4ada-b8bc-7c446868d4db] Running
	I0318 21:34:47.996212   41726 system_pods.go:74] duration metric: took 168.852951ms to wait for pod list to return data ...
	I0318 21:34:47.996224   41726 default_sa.go:34] waiting for default service account to be created ...
	I0318 21:34:48.193703   41726 default_sa.go:45] found service account: "default"
	I0318 21:34:48.193728   41726 default_sa.go:55] duration metric: took 197.496502ms for default service account to be created ...
	I0318 21:34:48.193735   41726 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 21:34:48.395974   41726 system_pods.go:86] 7 kube-system pods found
	I0318 21:34:48.396000   41726 system_pods.go:89] "coredns-6d4b75cb6d-k6455" [9a3179ac-8035-417d-9eda-c62cfa856a51] Running
	I0318 21:34:48.396005   41726 system_pods.go:89] "etcd-test-preload-524155" [7bed48d6-cb17-4ad4-8919-3c29fe72c342] Running
	I0318 21:34:48.396009   41726 system_pods.go:89] "kube-apiserver-test-preload-524155" [9a8008ed-0819-4874-9f8c-d8526bc1d7dc] Running
	I0318 21:34:48.396019   41726 system_pods.go:89] "kube-controller-manager-test-preload-524155" [168c23db-faef-4116-b37f-c68c11eafd29] Running
	I0318 21:34:48.396022   41726 system_pods.go:89] "kube-proxy-w9f2x" [7a980559-403e-435c-8d08-2e0c92cdd4a8] Running
	I0318 21:34:48.396026   41726 system_pods.go:89] "kube-scheduler-test-preload-524155" [7ebea108-d249-4322-86ae-6860dc72e175] Running
	I0318 21:34:48.396029   41726 system_pods.go:89] "storage-provisioner" [11c8970f-b4de-4ada-b8bc-7c446868d4db] Running
	I0318 21:34:48.396035   41726 system_pods.go:126] duration metric: took 202.295633ms to wait for k8s-apps to be running ...
	I0318 21:34:48.396042   41726 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 21:34:48.396082   41726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 21:34:48.412220   41726 system_svc.go:56] duration metric: took 16.167998ms WaitForService to wait for kubelet
	I0318 21:34:48.412247   41726 kubeadm.go:576] duration metric: took 10.727548126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:34:48.412264   41726 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:34:48.592366   41726 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:34:48.592387   41726 node_conditions.go:123] node cpu capacity is 2
	I0318 21:34:48.592397   41726 node_conditions.go:105] duration metric: took 180.128609ms to run NodePressure ...
	I0318 21:34:48.592408   41726 start.go:240] waiting for startup goroutines ...
	I0318 21:34:48.592414   41726 start.go:245] waiting for cluster config update ...
	I0318 21:34:48.592424   41726 start.go:254] writing updated cluster config ...
	I0318 21:34:48.592677   41726 ssh_runner.go:195] Run: rm -f paused
	I0318 21:34:48.638200   41726 start.go:600] kubectl: 1.29.3, cluster: 1.24.4 (minor skew: 5)
	I0318 21:34:48.639940   41726 out.go:177] 
	W0318 21:34:48.641108   41726 out.go:239] ! /usr/local/bin/kubectl is version 1.29.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0318 21:34:48.642334   41726 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0318 21:34:48.643688   41726 out.go:177] * Done! kubectl is now configured to use "test-preload-524155" cluster and "default" namespace by default
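
The earlier warning about /usr/local/bin/kubectl 1.29.3 against cluster 1.24.4 is a plain minor-version comparison: 29 - 24 = 5 minors of skew. A tiny sketch of that arithmetic; the +/-1 threshold used below is an assumption, minikube's own cut-off may differ:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(v, ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.29.3", "1.24.4"
	skew := minor(kubectl) - minor(cluster)
	fmt.Printf("minor skew: %d\n", skew) // prints 5, matching the log
	if skew > 1 || skew < -1 {
		fmt.Println("kubectl and cluster versions may have incompatibilities")
	}
}
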
	
	
	==> CRI-O <==
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.574379643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797689574360616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b8aa20f-2f06-4b8a-a23b-ca616f48cbd5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.574998763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb075b8c-08de-4935-960e-3b1a45f0514f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.575049660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb075b8c-08de-4935-960e-3b1a45f0514f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.575204363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628a767e7394ccaeae5669f9343a61dffca9b670aefe30db8039c298ee58187b,PodSandboxId:e8e4bdaade87f8dd700df6757163c74e6bcf11850d321224efca15b6b100cdc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710797683364814655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-k6455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3179ac-8035-417d-9eda-c62cfa856a51,},Annotations:map[string]string{io.kubernetes.container.hash: e56f2bf9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f223af532a75d9acc70ae5ef0f6dbecaeb7d2effe2cc883e7828f61b0dbc4ef8,PodSandboxId:ca602e96cf99e5aaf78bf6d966e3913d3cf2ae918ae97e3bc874279a048c0e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710797676594805487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a980559-403e-435c-8d08-2e0c92cdd4a8,},Annotations:map[string]string{io.kubernetes.container.hash: 1554e763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae78b9cbc06ba3b6425b29e7d1abbbdd4c58c3f429ef4e3ce411db0bb18153bf,PodSandboxId:fb5b306da41e8fd363dad685d9ecb3b4cf142ce0cdfa506ad15a107a6ea92522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710797676128827750,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
c8970f-b4de-4ada-b8bc-7c446868d4db,},Annotations:map[string]string{io.kubernetes.container.hash: efebcfd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987d48ce0ae3645fc9e89d60206f211848921ab2abc1245604b072daa61e8ed3,PodSandboxId:a4e5661faf4ed06515823f11695039e4a1d199e20393d431386bda2989d1d30d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710797670799978523,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d86188c
fb37a6c967cc2c685b11315,},Annotations:map[string]string{io.kubernetes.container.hash: 946dcb37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b65cf687668e29d7b6d16be2cb726306395fd1e50bc3210abd42a86f5d36ae9,PodSandboxId:78fe40389583ac7c6a5754949c91c1fd03995d8ea63c058afaa8ccb918a7e9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710797670741351954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fb8935872182f6ab64
9421c88d971a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c7e6db8e4df2616554e2ccf5acdee04ffca7e11eb4eb8a104b92d21ee519b,PodSandboxId:f862d9f4c3da5664fdad0e7118a50dca7de4f2db51b444d9a47e2d3956e1ee5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710797670766129225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7b3
3da6143edc0183658cfd8169c86c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae177c04f4f1a92314c289db5fabee8a28d9fcda507bd47c8320ab77ca438227,PodSandboxId:9b48428edff1a361a90ff78f3938f497761350820d482ba51841acee423582fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710797670746976977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e8ef5a01fb886e167c0ba574b256fc,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4197f2b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb075b8c-08de-4935-960e-3b1a45f0514f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.618985676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=031859fb-ea6f-43f1-b8b7-e880f8f84e3d name=/runtime.v1.RuntimeService/Version
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.619059938Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=031859fb-ea6f-43f1-b8b7-e880f8f84e3d name=/runtime.v1.RuntimeService/Version
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.621105806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac2147c6-4ef4-4418-809b-afab97d4ba5a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.621538850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797689621515792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac2147c6-4ef4-4418-809b-afab97d4ba5a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.622267013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b955b183-4cc0-4b5f-b2bf-7dcd43d06c6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.622320926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b955b183-4cc0-4b5f-b2bf-7dcd43d06c6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.623015860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628a767e7394ccaeae5669f9343a61dffca9b670aefe30db8039c298ee58187b,PodSandboxId:e8e4bdaade87f8dd700df6757163c74e6bcf11850d321224efca15b6b100cdc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710797683364814655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-k6455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3179ac-8035-417d-9eda-c62cfa856a51,},Annotations:map[string]string{io.kubernetes.container.hash: e56f2bf9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f223af532a75d9acc70ae5ef0f6dbecaeb7d2effe2cc883e7828f61b0dbc4ef8,PodSandboxId:ca602e96cf99e5aaf78bf6d966e3913d3cf2ae918ae97e3bc874279a048c0e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710797676594805487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a980559-403e-435c-8d08-2e0c92cdd4a8,},Annotations:map[string]string{io.kubernetes.container.hash: 1554e763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae78b9cbc06ba3b6425b29e7d1abbbdd4c58c3f429ef4e3ce411db0bb18153bf,PodSandboxId:fb5b306da41e8fd363dad685d9ecb3b4cf142ce0cdfa506ad15a107a6ea92522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710797676128827750,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
c8970f-b4de-4ada-b8bc-7c446868d4db,},Annotations:map[string]string{io.kubernetes.container.hash: efebcfd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987d48ce0ae3645fc9e89d60206f211848921ab2abc1245604b072daa61e8ed3,PodSandboxId:a4e5661faf4ed06515823f11695039e4a1d199e20393d431386bda2989d1d30d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710797670799978523,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d86188c
fb37a6c967cc2c685b11315,},Annotations:map[string]string{io.kubernetes.container.hash: 946dcb37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b65cf687668e29d7b6d16be2cb726306395fd1e50bc3210abd42a86f5d36ae9,PodSandboxId:78fe40389583ac7c6a5754949c91c1fd03995d8ea63c058afaa8ccb918a7e9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710797670741351954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fb8935872182f6ab64
9421c88d971a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c7e6db8e4df2616554e2ccf5acdee04ffca7e11eb4eb8a104b92d21ee519b,PodSandboxId:f862d9f4c3da5664fdad0e7118a50dca7de4f2db51b444d9a47e2d3956e1ee5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710797670766129225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7b3
3da6143edc0183658cfd8169c86c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae177c04f4f1a92314c289db5fabee8a28d9fcda507bd47c8320ab77ca438227,PodSandboxId:9b48428edff1a361a90ff78f3938f497761350820d482ba51841acee423582fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710797670746976977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e8ef5a01fb886e167c0ba574b256fc,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4197f2b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b955b183-4cc0-4b5f-b2bf-7dcd43d06c6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.664981945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=710b7e55-64d8-4348-b2bb-ea33cab54dba name=/runtime.v1.RuntimeService/Version
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.665053067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=710b7e55-64d8-4348-b2bb-ea33cab54dba name=/runtime.v1.RuntimeService/Version
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.666190761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b671ad6-4c3a-4a15-be30-ebc98ee83d38 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.666608294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797689666576262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b671ad6-4c3a-4a15-be30-ebc98ee83d38 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.667444781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c49c015-c8de-4e6b-acd4-49ce5ffa94ff name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.667492967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c49c015-c8de-4e6b-acd4-49ce5ffa94ff name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.667771629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628a767e7394ccaeae5669f9343a61dffca9b670aefe30db8039c298ee58187b,PodSandboxId:e8e4bdaade87f8dd700df6757163c74e6bcf11850d321224efca15b6b100cdc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710797683364814655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-k6455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3179ac-8035-417d-9eda-c62cfa856a51,},Annotations:map[string]string{io.kubernetes.container.hash: e56f2bf9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f223af532a75d9acc70ae5ef0f6dbecaeb7d2effe2cc883e7828f61b0dbc4ef8,PodSandboxId:ca602e96cf99e5aaf78bf6d966e3913d3cf2ae918ae97e3bc874279a048c0e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710797676594805487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a980559-403e-435c-8d08-2e0c92cdd4a8,},Annotations:map[string]string{io.kubernetes.container.hash: 1554e763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae78b9cbc06ba3b6425b29e7d1abbbdd4c58c3f429ef4e3ce411db0bb18153bf,PodSandboxId:fb5b306da41e8fd363dad685d9ecb3b4cf142ce0cdfa506ad15a107a6ea92522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710797676128827750,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
c8970f-b4de-4ada-b8bc-7c446868d4db,},Annotations:map[string]string{io.kubernetes.container.hash: efebcfd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987d48ce0ae3645fc9e89d60206f211848921ab2abc1245604b072daa61e8ed3,PodSandboxId:a4e5661faf4ed06515823f11695039e4a1d199e20393d431386bda2989d1d30d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710797670799978523,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d86188c
fb37a6c967cc2c685b11315,},Annotations:map[string]string{io.kubernetes.container.hash: 946dcb37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b65cf687668e29d7b6d16be2cb726306395fd1e50bc3210abd42a86f5d36ae9,PodSandboxId:78fe40389583ac7c6a5754949c91c1fd03995d8ea63c058afaa8ccb918a7e9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710797670741351954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fb8935872182f6ab64
9421c88d971a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c7e6db8e4df2616554e2ccf5acdee04ffca7e11eb4eb8a104b92d21ee519b,PodSandboxId:f862d9f4c3da5664fdad0e7118a50dca7de4f2db51b444d9a47e2d3956e1ee5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710797670766129225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7b3
3da6143edc0183658cfd8169c86c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae177c04f4f1a92314c289db5fabee8a28d9fcda507bd47c8320ab77ca438227,PodSandboxId:9b48428edff1a361a90ff78f3938f497761350820d482ba51841acee423582fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710797670746976977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e8ef5a01fb886e167c0ba574b256fc,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4197f2b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c49c015-c8de-4e6b-acd4-49ce5ffa94ff name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.701010598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16a3aa79-bc12-4f2e-a349-82a29e28ddc8 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.701075131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16a3aa79-bc12-4f2e-a349-82a29e28ddc8 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.701978439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c918f801-b9f1-4aab-a552-a0ee1db01177 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.702387529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710797689702369053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c918f801-b9f1-4aab-a552-a0ee1db01177 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.702930925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89c987a3-27e6-470d-adb4-ea4b0ce9e904 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.702979395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89c987a3-27e6-470d-adb4-ea4b0ce9e904 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:34:49 test-preload-524155 crio[687]: time="2024-03-18 21:34:49.703163577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:628a767e7394ccaeae5669f9343a61dffca9b670aefe30db8039c298ee58187b,PodSandboxId:e8e4bdaade87f8dd700df6757163c74e6bcf11850d321224efca15b6b100cdc2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710797683364814655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-k6455,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a3179ac-8035-417d-9eda-c62cfa856a51,},Annotations:map[string]string{io.kubernetes.container.hash: e56f2bf9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f223af532a75d9acc70ae5ef0f6dbecaeb7d2effe2cc883e7828f61b0dbc4ef8,PodSandboxId:ca602e96cf99e5aaf78bf6d966e3913d3cf2ae918ae97e3bc874279a048c0e47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710797676594805487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9f2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a980559-403e-435c-8d08-2e0c92cdd4a8,},Annotations:map[string]string{io.kubernetes.container.hash: 1554e763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae78b9cbc06ba3b6425b29e7d1abbbdd4c58c3f429ef4e3ce411db0bb18153bf,PodSandboxId:fb5b306da41e8fd363dad685d9ecb3b4cf142ce0cdfa506ad15a107a6ea92522,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710797676128827750,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
c8970f-b4de-4ada-b8bc-7c446868d4db,},Annotations:map[string]string{io.kubernetes.container.hash: efebcfd7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:987d48ce0ae3645fc9e89d60206f211848921ab2abc1245604b072daa61e8ed3,PodSandboxId:a4e5661faf4ed06515823f11695039e4a1d199e20393d431386bda2989d1d30d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710797670799978523,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d86188c
fb37a6c967cc2c685b11315,},Annotations:map[string]string{io.kubernetes.container.hash: 946dcb37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b65cf687668e29d7b6d16be2cb726306395fd1e50bc3210abd42a86f5d36ae9,PodSandboxId:78fe40389583ac7c6a5754949c91c1fd03995d8ea63c058afaa8ccb918a7e9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710797670741351954,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fb8935872182f6ab64
9421c88d971a,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60c7e6db8e4df2616554e2ccf5acdee04ffca7e11eb4eb8a104b92d21ee519b,PodSandboxId:f862d9f4c3da5664fdad0e7118a50dca7de4f2db51b444d9a47e2d3956e1ee5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710797670766129225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7b3
3da6143edc0183658cfd8169c86c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae177c04f4f1a92314c289db5fabee8a28d9fcda507bd47c8320ab77ca438227,PodSandboxId:9b48428edff1a361a90ff78f3938f497761350820d482ba51841acee423582fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710797670746976977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-524155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40e8ef5a01fb886e167c0ba574b256fc,},Annotation
s:map[string]string{io.kubernetes.container.hash: 4197f2b5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89c987a3-27e6-470d-adb4-ea4b0ce9e904 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	628a767e7394c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   e8e4bdaade87f       coredns-6d4b75cb6d-k6455
	f223af532a75d       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   ca602e96cf99e       kube-proxy-w9f2x
	ae78b9cbc06ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   fb5b306da41e8       storage-provisioner
	987d48ce0ae36       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   a4e5661faf4ed       kube-apiserver-test-preload-524155
	b60c7e6db8e4d       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   f862d9f4c3da5       kube-controller-manager-test-preload-524155
	ae177c04f4f1a       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   9b48428edff1a       etcd-test-preload-524155
	0b65cf687668e       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   78fe40389583a       kube-scheduler-test-preload-524155
	
	
	==> coredns [628a767e7394ccaeae5669f9343a61dffca9b670aefe30db8039c298ee58187b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:40652 - 50599 "HINFO IN 3593114458926144864.2905250362403167741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015926485s
	
	
	==> describe nodes <==
	Name:               test-preload-524155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-524155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=test-preload-524155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T21_33_13_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:33:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-524155
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:34:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:34:44 +0000   Mon, 18 Mar 2024 21:33:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:34:44 +0000   Mon, 18 Mar 2024 21:33:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:34:44 +0000   Mon, 18 Mar 2024 21:33:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:34:44 +0000   Mon, 18 Mar 2024 21:34:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    test-preload-524155
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b0c9a81064234d5f9bc1904bd3e7190d
	  System UUID:                b0c9a810-6423-4d5f-9bc1-904bd3e7190d
	  Boot ID:                    0f5979f5-094c-4f44-a889-2f097dc77e65
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-k6455                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-test-preload-524155                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-524155             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-test-preload-524155    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-w9f2x                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-test-preload-524155             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node test-preload-524155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node test-preload-524155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node test-preload-524155 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                85s                kubelet          Node test-preload-524155 status is now: NodeReady
	  Normal  RegisteredNode           83s                node-controller  Node test-preload-524155 event: Registered Node test-preload-524155 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-524155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-524155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-524155 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node test-preload-524155 event: Registered Node test-preload-524155 in Controller
	
	
	==> dmesg <==
	[Mar18 21:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051334] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042848] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar18 21:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.403796] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.764071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.938436] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.057876] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057680] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.206634] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.131167] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.284831] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[ +12.984421] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.061563] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.927234] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[  +6.310604] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.691021] systemd-fstab-generator[1704]: Ignoring "noauto" option for root device
	[  +5.467275] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [ae177c04f4f1a92314c289db5fabee8a28d9fcda507bd47c8320ab77ca438227] <==
	{"level":"info","ts":"2024-03-18T21:34:31.083Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f8926bd555ec3d0e","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-18T21:34:31.085Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-18T21:34:31.088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e switched to configuration voters=(17911497232019635470)"}
	{"level":"info","ts":"2024-03-18T21:34:31.088Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","added-peer-id":"f8926bd555ec3d0e","added-peer-peer-urls":["https://192.168.39.10:2380"]}
	{"level":"info","ts":"2024-03-18T21:34:31.088Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:34:31.088Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:34:31.113Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T21:34:31.115Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T21:34:31.115Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T21:34:31.115Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-03-18T21:34:31.119Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-03-18T21:34:32.052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T21:34:32.052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T21:34:32.052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-03-18T21:34:32.052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T21:34:32.052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-03-18T21:34:32.052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 3"}
	{"level":"info","ts":"2024-03-18T21:34:32.052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-03-18T21:34:32.054Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:test-preload-524155 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T21:34:32.054Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:34:32.055Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:34:32.056Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T21:34:32.060Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2024-03-18T21:34:32.060Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T21:34:32.060Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:34:50 up 0 min,  0 users,  load average: 0.41, 0.12, 0.04
	Linux test-preload-524155 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [987d48ce0ae3645fc9e89d60206f211848921ab2abc1245604b072daa61e8ed3] <==
	I0318 21:34:34.622887       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 21:34:34.623032       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 21:34:34.623176       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0318 21:34:34.623280       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0318 21:34:34.627185       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 21:34:34.643035       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 21:34:34.691507       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 21:34:34.692243       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0318 21:34:34.701809       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 21:34:34.703014       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0318 21:34:34.704175       1 cache.go:39] Caches are synced for autoregister controller
	E0318 21:34:34.705598       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0318 21:34:34.723332       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0318 21:34:34.727565       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0318 21:34:34.758250       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 21:34:35.269549       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0318 21:34:35.597962       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 21:34:36.222274       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0318 21:34:36.237584       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0318 21:34:36.290755       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0318 21:34:36.311572       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 21:34:36.324058       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 21:34:36.931489       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0318 21:34:47.208128       1 controller.go:611] quota admission added evaluator for: endpoints
	I0318 21:34:47.377434       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b60c7e6db8e4df2616554e2ccf5acdee04ffca7e11eb4eb8a104b92d21ee519b] <==
	I0318 21:34:47.197905       1 shared_informer.go:262] Caches are synced for expand
	I0318 21:34:47.200458       1 shared_informer.go:262] Caches are synced for endpoint
	I0318 21:34:47.261757       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	W0318 21:34:47.269772       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-524155" does not exist
	I0318 21:34:47.284610       1 shared_informer.go:262] Caches are synced for taint
	I0318 21:34:47.284830       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0318 21:34:47.284931       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-524155. Assuming now as a timestamp.
	I0318 21:34:47.284977       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0318 21:34:47.285266       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0318 21:34:47.285552       1 event.go:294] "Event occurred" object="test-preload-524155" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-524155 event: Registered Node test-preload-524155 in Controller"
	I0318 21:34:47.287189       1 shared_informer.go:262] Caches are synced for TTL
	I0318 21:34:47.295137       1 shared_informer.go:262] Caches are synced for node
	I0318 21:34:47.295191       1 range_allocator.go:173] Starting range CIDR allocator
	I0318 21:34:47.295214       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0318 21:34:47.295240       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0318 21:34:47.321623       1 shared_informer.go:262] Caches are synced for GC
	I0318 21:34:47.323005       1 shared_informer.go:262] Caches are synced for daemon sets
	I0318 21:34:47.338594       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 21:34:47.348116       1 shared_informer.go:262] Caches are synced for persistent volume
	I0318 21:34:47.352543       1 shared_informer.go:262] Caches are synced for attach detach
	I0318 21:34:47.368350       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0318 21:34:47.371907       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 21:34:47.808404       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 21:34:47.808425       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0318 21:34:47.836871       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [f223af532a75d9acc70ae5ef0f6dbecaeb7d2effe2cc883e7828f61b0dbc4ef8] <==
	I0318 21:34:36.806380       1 node.go:163] Successfully retrieved node IP: 192.168.39.10
	I0318 21:34:36.806453       1 server_others.go:138] "Detected node IP" address="192.168.39.10"
	I0318 21:34:36.806481       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0318 21:34:36.908544       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0318 21:34:36.908614       1 server_others.go:206] "Using iptables Proxier"
	I0318 21:34:36.909654       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0318 21:34:36.924076       1 server.go:661] "Version info" version="v1.24.4"
	I0318 21:34:36.924130       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:34:36.925300       1 config.go:317] "Starting service config controller"
	I0318 21:34:36.925548       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0318 21:34:36.925613       1 config.go:226] "Starting endpoint slice config controller"
	I0318 21:34:36.925632       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0318 21:34:36.926637       1 config.go:444] "Starting node config controller"
	I0318 21:34:36.926754       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0318 21:34:37.025998       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0318 21:34:37.026088       1 shared_informer.go:262] Caches are synced for service config
	I0318 21:34:37.027808       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [0b65cf687668e29d7b6d16be2cb726306395fd1e50bc3210abd42a86f5d36ae9] <==
	I0318 21:34:31.757318       1 serving.go:348] Generated self-signed cert in-memory
	W0318 21:34:34.692340       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 21:34:34.692424       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 21:34:34.692452       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 21:34:34.692480       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 21:34:34.725610       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0318 21:34:34.725657       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:34:34.729312       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0318 21:34:34.734402       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 21:34:34.734440       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 21:34:34.734470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 21:34:34.834794       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.035549    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbq26\" (UniqueName: \"kubernetes.io/projected/11c8970f-b4de-4ada-b8bc-7c446868d4db-kube-api-access-xbq26\") pod \"storage-provisioner\" (UID: \"11c8970f-b4de-4ada-b8bc-7c446868d4db\") " pod="kube-system/storage-provisioner"
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.035598    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a980559-403e-435c-8d08-2e0c92cdd4a8-kube-proxy\") pod \"kube-proxy-w9f2x\" (UID: \"7a980559-403e-435c-8d08-2e0c92cdd4a8\") " pod="kube-system/kube-proxy-w9f2x"
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.035655    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a980559-403e-435c-8d08-2e0c92cdd4a8-xtables-lock\") pod \"kube-proxy-w9f2x\" (UID: \"7a980559-403e-435c-8d08-2e0c92cdd4a8\") " pod="kube-system/kube-proxy-w9f2x"
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.035759    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume\") pod \"coredns-6d4b75cb6d-k6455\" (UID: \"9a3179ac-8035-417d-9eda-c62cfa856a51\") " pod="kube-system/coredns-6d4b75cb6d-k6455"
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.035791    1079 reconciler.go:159] "Reconciler: start to sync state"
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: E0318 21:34:35.046200    1079 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.483201    1079 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsg6p\" (UniqueName: \"kubernetes.io/projected/0faf63cd-2f69-4252-b153-4eb1e7070ff4-kube-api-access-bsg6p\") pod \"0faf63cd-2f69-4252-b153-4eb1e7070ff4\" (UID: \"0faf63cd-2f69-4252-b153-4eb1e7070ff4\") "
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.483804    1079 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0faf63cd-2f69-4252-b153-4eb1e7070ff4-config-volume\") pod \"0faf63cd-2f69-4252-b153-4eb1e7070ff4\" (UID: \"0faf63cd-2f69-4252-b153-4eb1e7070ff4\") "
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: W0318 21:34:35.485208    1079 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/0faf63cd-2f69-4252-b153-4eb1e7070ff4/volumes/kubernetes.io~projected/kube-api-access-bsg6p: clearQuota called, but quotas disabled
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.485386    1079 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0faf63cd-2f69-4252-b153-4eb1e7070ff4-kube-api-access-bsg6p" (OuterVolumeSpecName: "kube-api-access-bsg6p") pod "0faf63cd-2f69-4252-b153-4eb1e7070ff4" (UID: "0faf63cd-2f69-4252-b153-4eb1e7070ff4"). InnerVolumeSpecName "kube-api-access-bsg6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: W0318 21:34:35.485635    1079 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/0faf63cd-2f69-4252-b153-4eb1e7070ff4/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: E0318 21:34:35.485967    1079 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: E0318 21:34:35.486131    1079 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume podName:9a3179ac-8035-417d-9eda-c62cfa856a51 nodeName:}" failed. No retries permitted until 2024-03-18 21:34:35.986037767 +0000 UTC m=+6.150960678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume") pod "coredns-6d4b75cb6d-k6455" (UID: "9a3179ac-8035-417d-9eda-c62cfa856a51") : object "kube-system"/"coredns" not registered
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.486725    1079 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0faf63cd-2f69-4252-b153-4eb1e7070ff4-config-volume" (OuterVolumeSpecName: "config-volume") pod "0faf63cd-2f69-4252-b153-4eb1e7070ff4" (UID: "0faf63cd-2f69-4252-b153-4eb1e7070ff4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.585408    1079 reconciler.go:384] "Volume detached for volume \"kube-api-access-bsg6p\" (UniqueName: \"kubernetes.io/projected/0faf63cd-2f69-4252-b153-4eb1e7070ff4-kube-api-access-bsg6p\") on node \"test-preload-524155\" DevicePath \"\""
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: I0318 21:34:35.585470    1079 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0faf63cd-2f69-4252-b153-4eb1e7070ff4-config-volume\") on node \"test-preload-524155\" DevicePath \"\""
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: E0318 21:34:35.988266    1079 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 21:34:35 test-preload-524155 kubelet[1079]: E0318 21:34:35.988334    1079 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume podName:9a3179ac-8035-417d-9eda-c62cfa856a51 nodeName:}" failed. No retries permitted until 2024-03-18 21:34:36.98832127 +0000 UTC m=+7.153244193 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume") pod "coredns-6d4b75cb6d-k6455" (UID: "9a3179ac-8035-417d-9eda-c62cfa856a51") : object "kube-system"/"coredns" not registered
	Mar 18 21:34:36 test-preload-524155 kubelet[1079]: E0318 21:34:36.998443    1079 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 21:34:36 test-preload-524155 kubelet[1079]: E0318 21:34:36.998937    1079 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume podName:9a3179ac-8035-417d-9eda-c62cfa856a51 nodeName:}" failed. No retries permitted until 2024-03-18 21:34:38.998916256 +0000 UTC m=+9.163839179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume") pod "coredns-6d4b75cb6d-k6455" (UID: "9a3179ac-8035-417d-9eda-c62cfa856a51") : object "kube-system"/"coredns" not registered
	Mar 18 21:34:37 test-preload-524155 kubelet[1079]: E0318 21:34:37.091197    1079 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-k6455" podUID=9a3179ac-8035-417d-9eda-c62cfa856a51
	Mar 18 21:34:38 test-preload-524155 kubelet[1079]: I0318 21:34:38.103261    1079 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0faf63cd-2f69-4252-b153-4eb1e7070ff4 path="/var/lib/kubelet/pods/0faf63cd-2f69-4252-b153-4eb1e7070ff4/volumes"
	Mar 18 21:34:39 test-preload-524155 kubelet[1079]: E0318 21:34:39.015614    1079 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 21:34:39 test-preload-524155 kubelet[1079]: E0318 21:34:39.015751    1079 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume podName:9a3179ac-8035-417d-9eda-c62cfa856a51 nodeName:}" failed. No retries permitted until 2024-03-18 21:34:43.015731912 +0000 UTC m=+13.180654824 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9a3179ac-8035-417d-9eda-c62cfa856a51-config-volume") pod "coredns-6d4b75cb6d-k6455" (UID: "9a3179ac-8035-417d-9eda-c62cfa856a51") : object "kube-system"/"coredns" not registered
	Mar 18 21:34:39 test-preload-524155 kubelet[1079]: E0318 21:34:39.090981    1079 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-k6455" podUID=9a3179ac-8035-417d-9eda-c62cfa856a51
	
	
	==> storage-provisioner [ae78b9cbc06ba3b6425b29e7d1abbbdd4c58c3f429ef4e3ce411db0bb18153bf] <==
	I0318 21:34:36.276150       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-524155 -n test-preload-524155
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-524155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-524155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-524155
--- FAIL: TestPreload (250.42s)

                                                
                                    
x
+
TestKubernetesUpgrade (422.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m0.893557029s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-397473] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-397473" primary control-plane node in "kubernetes-upgrade-397473" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:39:52.885419   47737 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:39:52.885531   47737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:39:52.885540   47737 out.go:304] Setting ErrFile to fd 2...
	I0318 21:39:52.885544   47737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:39:52.885731   47737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:39:52.886247   47737 out.go:298] Setting JSON to false
	I0318 21:39:52.887170   47737 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4937,"bootTime":1710793056,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:39:52.887225   47737 start.go:139] virtualization: kvm guest
	I0318 21:39:52.889318   47737 out.go:177] * [kubernetes-upgrade-397473] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:39:52.890933   47737 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:39:52.892100   47737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:39:52.890943   47737 notify.go:220] Checking for updates...
	I0318 21:39:52.894373   47737 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:39:52.895583   47737 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:39:52.896844   47737 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:39:52.898059   47737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:39:52.899734   47737 config.go:182] Loaded profile config "NoKubernetes-779999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0318 21:39:52.899823   47737 config.go:182] Loaded profile config "cert-expiration-443643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:39:52.899903   47737 config.go:182] Loaded profile config "running-upgrade-857338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0318 21:39:52.899975   47737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:39:52.934813   47737 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 21:39:52.936000   47737 start.go:297] selected driver: kvm2
	I0318 21:39:52.936018   47737 start.go:901] validating driver "kvm2" against <nil>
	I0318 21:39:52.936035   47737 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:39:52.937019   47737 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:39:52.937120   47737 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:39:52.950940   47737 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:39:52.950979   47737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 21:39:52.951192   47737 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 21:39:52.951251   47737 cni.go:84] Creating CNI manager for ""
	I0318 21:39:52.951268   47737 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:39:52.951278   47737 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 21:39:52.951322   47737 start.go:340] cluster config:
	{Name:kubernetes-upgrade-397473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-397473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:39:52.951429   47737 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:39:52.953010   47737 out.go:177] * Starting "kubernetes-upgrade-397473" primary control-plane node in "kubernetes-upgrade-397473" cluster
	I0318 21:39:52.954211   47737 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:39:52.954241   47737 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 21:39:52.954253   47737 cache.go:56] Caching tarball of preloaded images
	I0318 21:39:52.954339   47737 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 21:39:52.954353   47737 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 21:39:52.954447   47737 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/config.json ...
	I0318 21:39:52.954465   47737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/config.json: {Name:mk641b50e278fc7f2a595e0c2c2253b4f622f032 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:39:52.954605   47737 start.go:360] acquireMachinesLock for kubernetes-upgrade-397473: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:40:19.337628   47737 start.go:364] duration metric: took 26.382966004s to acquireMachinesLock for "kubernetes-upgrade-397473"
	I0318 21:40:19.337704   47737 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-397473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-397473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:40:19.337807   47737 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 21:40:19.339949   47737 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 21:40:19.340150   47737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:40:19.340181   47737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:40:19.359491   47737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0318 21:40:19.359926   47737 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:40:19.360469   47737 main.go:141] libmachine: Using API Version  1
	I0318 21:40:19.360494   47737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:40:19.360923   47737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:40:19.361132   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetMachineName
	I0318 21:40:19.361287   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:40:19.361431   47737 start.go:159] libmachine.API.Create for "kubernetes-upgrade-397473" (driver="kvm2")
	I0318 21:40:19.361471   47737 client.go:168] LocalClient.Create starting
	I0318 21:40:19.361509   47737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 21:40:19.361546   47737 main.go:141] libmachine: Decoding PEM data...
	I0318 21:40:19.361564   47737 main.go:141] libmachine: Parsing certificate...
	I0318 21:40:19.361642   47737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 21:40:19.361669   47737 main.go:141] libmachine: Decoding PEM data...
	I0318 21:40:19.361688   47737 main.go:141] libmachine: Parsing certificate...
	I0318 21:40:19.361723   47737 main.go:141] libmachine: Running pre-create checks...
	I0318 21:40:19.361734   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .PreCreateCheck
	I0318 21:40:19.362065   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetConfigRaw
	I0318 21:40:19.362447   47737 main.go:141] libmachine: Creating machine...
	I0318 21:40:19.362459   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .Create
	I0318 21:40:19.362593   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Creating KVM machine...
	I0318 21:40:19.363689   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found existing default KVM network
	I0318 21:40:19.365359   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:19.365205   48231 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000157d0}
	I0318 21:40:19.365383   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | created network xml: 
	I0318 21:40:19.365396   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | <network>
	I0318 21:40:19.365405   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |   <name>mk-kubernetes-upgrade-397473</name>
	I0318 21:40:19.365420   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |   <dns enable='no'/>
	I0318 21:40:19.365434   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |   
	I0318 21:40:19.365445   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 21:40:19.365456   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |     <dhcp>
	I0318 21:40:19.365471   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 21:40:19.365482   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |     </dhcp>
	I0318 21:40:19.365492   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |   </ip>
	I0318 21:40:19.365503   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG |   
	I0318 21:40:19.365523   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | </network>
	I0318 21:40:19.365537   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | 
	I0318 21:40:19.370626   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | trying to create private KVM network mk-kubernetes-upgrade-397473 192.168.39.0/24...
	I0318 21:40:19.442334   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | private KVM network mk-kubernetes-upgrade-397473 192.168.39.0/24 created
	I0318 21:40:19.442363   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473 ...
	I0318 21:40:19.442386   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:19.442326   48231 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:40:19.442406   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 21:40:19.442464   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 21:40:19.658956   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:19.658845   48231 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa...
	I0318 21:40:20.185435   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:20.185299   48231 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/kubernetes-upgrade-397473.rawdisk...
	I0318 21:40:20.185464   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Writing magic tar header
	I0318 21:40:20.185477   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Writing SSH key tar header
	I0318 21:40:20.185487   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:20.185421   48231 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473 ...
	I0318 21:40:20.185572   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473
	I0318 21:40:20.185601   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 21:40:20.185620   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473 (perms=drwx------)
	I0318 21:40:20.185632   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:40:20.185657   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 21:40:20.185673   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 21:40:20.185688   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 21:40:20.185700   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Checking permissions on dir: /home/jenkins
	I0318 21:40:20.185717   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Checking permissions on dir: /home
	I0318 21:40:20.185736   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Skipping /home - not owner
	I0318 21:40:20.185767   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 21:40:20.185782   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 21:40:20.185801   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 21:40:20.185814   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 21:40:20.185825   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Creating domain...
	I0318 21:40:20.186858   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) define libvirt domain using xml: 
	I0318 21:40:20.186885   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) <domain type='kvm'>
	I0318 21:40:20.186893   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   <name>kubernetes-upgrade-397473</name>
	I0318 21:40:20.186898   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   <memory unit='MiB'>2200</memory>
	I0318 21:40:20.186903   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   <vcpu>2</vcpu>
	I0318 21:40:20.186908   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   <features>
	I0318 21:40:20.186913   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <acpi/>
	I0318 21:40:20.186917   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <apic/>
	I0318 21:40:20.186922   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <pae/>
	I0318 21:40:20.186936   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     
	I0318 21:40:20.186945   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   </features>
	I0318 21:40:20.186950   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   <cpu mode='host-passthrough'>
	I0318 21:40:20.186956   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   
	I0318 21:40:20.186963   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   </cpu>
	I0318 21:40:20.186968   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   <os>
	I0318 21:40:20.186979   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <type>hvm</type>
	I0318 21:40:20.187002   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <boot dev='cdrom'/>
	I0318 21:40:20.187021   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <boot dev='hd'/>
	I0318 21:40:20.187048   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <bootmenu enable='no'/>
	I0318 21:40:20.187070   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   </os>
	I0318 21:40:20.187081   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   <devices>
	I0318 21:40:20.187094   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <disk type='file' device='cdrom'>
	I0318 21:40:20.187112   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/boot2docker.iso'/>
	I0318 21:40:20.187122   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <target dev='hdc' bus='scsi'/>
	I0318 21:40:20.187138   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <readonly/>
	I0318 21:40:20.187153   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     </disk>
	I0318 21:40:20.187168   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <disk type='file' device='disk'>
	I0318 21:40:20.187182   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 21:40:20.187200   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/kubernetes-upgrade-397473.rawdisk'/>
	I0318 21:40:20.187224   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <target dev='hda' bus='virtio'/>
	I0318 21:40:20.187235   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     </disk>
	I0318 21:40:20.187246   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <interface type='network'>
	I0318 21:40:20.187260   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <source network='mk-kubernetes-upgrade-397473'/>
	I0318 21:40:20.187268   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <model type='virtio'/>
	I0318 21:40:20.187287   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     </interface>
	I0318 21:40:20.187296   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <interface type='network'>
	I0318 21:40:20.187311   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <source network='default'/>
	I0318 21:40:20.187328   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <model type='virtio'/>
	I0318 21:40:20.187336   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     </interface>
	I0318 21:40:20.187342   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <serial type='pty'>
	I0318 21:40:20.187352   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <target port='0'/>
	I0318 21:40:20.187361   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     </serial>
	I0318 21:40:20.187399   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <console type='pty'>
	I0318 21:40:20.187419   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <target type='serial' port='0'/>
	I0318 21:40:20.187431   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     </console>
	I0318 21:40:20.187442   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     <rng model='virtio'>
	I0318 21:40:20.187456   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)       <backend model='random'>/dev/random</backend>
	I0318 21:40:20.187467   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     </rng>
	I0318 21:40:20.187478   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     
	I0318 21:40:20.187489   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)     
	I0318 21:40:20.187499   47737 main.go:141] libmachine: (kubernetes-upgrade-397473)   </devices>
	I0318 21:40:20.187515   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) </domain>
	I0318 21:40:20.187530   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) 
	I0318 21:40:20.191740   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:23:cd:3b in network default
	I0318 21:40:20.192349   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Ensuring networks are active...
	I0318 21:40:20.192370   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:20.193100   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Ensuring network default is active
	I0318 21:40:20.193398   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Ensuring network mk-kubernetes-upgrade-397473 is active
	I0318 21:40:20.193981   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Getting domain xml...
	I0318 21:40:20.194739   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Creating domain...
	I0318 21:40:21.368722   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Waiting to get IP...
	I0318 21:40:21.369364   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:21.369708   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:21.369801   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:21.369722   48231 retry.go:31] will retry after 219.174359ms: waiting for machine to come up
	I0318 21:40:21.590120   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:21.590710   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:21.590760   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:21.590655   48231 retry.go:31] will retry after 360.344504ms: waiting for machine to come up
	I0318 21:40:21.952155   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:21.952624   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:21.952653   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:21.952559   48231 retry.go:31] will retry after 485.343594ms: waiting for machine to come up
	I0318 21:40:22.439246   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:22.439653   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:22.439681   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:22.439609   48231 retry.go:31] will retry after 512.049343ms: waiting for machine to come up
	I0318 21:40:22.953253   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:22.953682   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:22.953711   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:22.953634   48231 retry.go:31] will retry after 726.6908ms: waiting for machine to come up
	I0318 21:40:23.681607   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:23.682073   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:23.682102   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:23.682040   48231 retry.go:31] will retry after 941.531048ms: waiting for machine to come up
	I0318 21:40:24.625342   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:24.625841   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:24.625874   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:24.625789   48231 retry.go:31] will retry after 797.448345ms: waiting for machine to come up
	I0318 21:40:25.424521   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:25.425037   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:25.425068   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:25.425014   48231 retry.go:31] will retry after 1.248264365s: waiting for machine to come up
	I0318 21:40:26.675536   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:26.676005   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:26.676032   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:26.675970   48231 retry.go:31] will retry after 1.626761907s: waiting for machine to come up
	I0318 21:40:28.304748   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:28.305329   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:28.305382   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:28.305281   48231 retry.go:31] will retry after 1.421034831s: waiting for machine to come up
	I0318 21:40:29.728279   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:29.728877   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:29.728921   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:29.728817   48231 retry.go:31] will retry after 2.013491496s: waiting for machine to come up
	I0318 21:40:31.743591   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:31.744107   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:31.744138   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:31.744049   48231 retry.go:31] will retry after 3.225132477s: waiting for machine to come up
	I0318 21:40:34.970977   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:34.971417   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:34.971448   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:34.971369   48231 retry.go:31] will retry after 3.086122993s: waiting for machine to come up
	I0318 21:40:38.058721   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:38.059165   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find current IP address of domain kubernetes-upgrade-397473 in network mk-kubernetes-upgrade-397473
	I0318 21:40:38.059197   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | I0318 21:40:38.059103   48231 retry.go:31] will retry after 5.089677475s: waiting for machine to come up
	I0318 21:40:43.152443   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.152856   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Found IP for machine: 192.168.39.139
	I0318 21:40:43.152872   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Reserving static IP address...
	I0318 21:40:43.152883   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has current primary IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.153264   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-397473", mac: "52:54:00:5f:8a:7e", ip: "192.168.39.139"} in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.224967   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Getting to WaitForSSH function...
	I0318 21:40:43.224993   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Reserved static IP address: 192.168.39.139
	I0318 21:40:43.225010   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Waiting for SSH to be available...
	I0318 21:40:43.227507   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.227974   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:43.228006   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.228225   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Using SSH client type: external
	I0318 21:40:43.228254   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa (-rw-------)
	I0318 21:40:43.228314   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:40:43.228339   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | About to run SSH command:
	I0318 21:40:43.228356   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | exit 0
	I0318 21:40:43.352745   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | SSH cmd err, output: <nil>: 
	I0318 21:40:43.353053   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) KVM machine creation complete!
	I0318 21:40:43.353346   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetConfigRaw
	I0318 21:40:43.353993   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:40:43.354216   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:40:43.354360   47737 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 21:40:43.354379   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetState
	I0318 21:40:43.355665   47737 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 21:40:43.355677   47737 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 21:40:43.355683   47737 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 21:40:43.355689   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:43.357904   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.358244   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:43.358269   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.358405   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:43.358574   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.358682   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.358822   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:43.358974   47737 main.go:141] libmachine: Using SSH client type: native
	I0318 21:40:43.359186   47737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:40:43.359203   47737 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 21:40:43.464274   47737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:40:43.464295   47737 main.go:141] libmachine: Detecting the provisioner...
	I0318 21:40:43.464303   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:43.466863   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.467177   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:43.467208   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.467349   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:43.467553   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.467691   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.467812   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:43.467980   47737 main.go:141] libmachine: Using SSH client type: native
	I0318 21:40:43.468212   47737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:40:43.468227   47737 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 21:40:43.579565   47737 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 21:40:43.579668   47737 main.go:141] libmachine: found compatible host: buildroot
	I0318 21:40:43.579685   47737 main.go:141] libmachine: Provisioning with buildroot...
	I0318 21:40:43.579696   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetMachineName
	I0318 21:40:43.579955   47737 buildroot.go:166] provisioning hostname "kubernetes-upgrade-397473"
	I0318 21:40:43.579983   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetMachineName
	I0318 21:40:43.580143   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:43.582689   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.583073   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:43.583106   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.583269   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:43.583473   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.583641   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.583795   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:43.583972   47737 main.go:141] libmachine: Using SSH client type: native
	I0318 21:40:43.584211   47737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:40:43.584232   47737 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-397473 && echo "kubernetes-upgrade-397473" | sudo tee /etc/hostname
	I0318 21:40:43.716162   47737 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-397473
	
	I0318 21:40:43.716193   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:43.719728   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.720218   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:43.720249   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.720420   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:43.720611   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.720787   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.720998   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:43.721211   47737 main.go:141] libmachine: Using SSH client type: native
	I0318 21:40:43.721475   47737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:40:43.721507   47737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-397473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-397473/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-397473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:40:43.840599   47737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:40:43.840623   47737 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:40:43.840662   47737 buildroot.go:174] setting up certificates
	I0318 21:40:43.840674   47737 provision.go:84] configureAuth start
	I0318 21:40:43.840692   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetMachineName
	I0318 21:40:43.841016   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetIP
	I0318 21:40:43.844061   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.844530   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:43.844561   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.844720   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:43.847266   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.847609   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:43.847637   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.847777   47737 provision.go:143] copyHostCerts
	I0318 21:40:43.847853   47737 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:40:43.847863   47737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:40:43.847908   47737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:40:43.848002   47737 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:40:43.848011   47737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:40:43.848033   47737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:40:43.848099   47737 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:40:43.848106   47737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:40:43.848125   47737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:40:43.848180   47737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-397473 san=[127.0.0.1 192.168.39.139 kubernetes-upgrade-397473 localhost minikube]
	I0318 21:40:43.912436   47737 provision.go:177] copyRemoteCerts
	I0318 21:40:43.912490   47737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:40:43.912511   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:43.915227   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.915577   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:43.915606   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:43.915734   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:43.915941   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:43.916097   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:43.916242   47737 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa Username:docker}
	I0318 21:40:44.000544   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:40:44.033907   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0318 21:40:44.063930   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:40:44.095787   47737 provision.go:87] duration metric: took 255.095766ms to configureAuth
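	[editor's note] The configureAuth step above generates a server certificate whose SANs mirror the "generating server cert" line (127.0.0.1, 192.168.39.139, the machine name, localhost, minikube). For illustration only, here is a minimal self-signed sketch with Go's crypto/x509; minikube actually signs with the ca.pem/ca-key.pem pair, so self-signing here is a simplification, and the expiry comment refers to the CertExpiration value shown later in the profile dump.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed stand-in; the real flow signs with ca.pem / ca-key.pem.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-397473"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s in the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the "generating server cert" log line above.
			DNSNames:    []string{"kubernetes-upgrade-397473", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.139")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
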
	I0318 21:40:44.095817   47737 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:40:44.095989   47737 config.go:182] Loaded profile config "kubernetes-upgrade-397473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:40:44.096053   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:44.099259   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.099681   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:44.099711   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.099919   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:44.100129   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:44.100320   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:44.100523   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:44.100718   47737 main.go:141] libmachine: Using SSH client type: native
	I0318 21:40:44.100939   47737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:40:44.100960   47737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:40:44.390402   47737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:40:44.390427   47737 main.go:141] libmachine: Checking connection to Docker...
	I0318 21:40:44.390438   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetURL
	I0318 21:40:44.391762   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | Using libvirt version 6000000
	I0318 21:40:44.393911   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.394216   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:44.394241   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.394391   47737 main.go:141] libmachine: Docker is up and running!
	I0318 21:40:44.394409   47737 main.go:141] libmachine: Reticulating splines...
	I0318 21:40:44.394416   47737 client.go:171] duration metric: took 25.032934559s to LocalClient.Create
	I0318 21:40:44.394440   47737 start.go:167] duration metric: took 25.033010261s to libmachine.API.Create "kubernetes-upgrade-397473"
	I0318 21:40:44.394449   47737 start.go:293] postStartSetup for "kubernetes-upgrade-397473" (driver="kvm2")
	I0318 21:40:44.394458   47737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:40:44.394484   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:40:44.394721   47737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:40:44.394747   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:44.396870   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.397192   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:44.397222   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.397342   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:44.397510   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:44.397680   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:44.397829   47737 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa Username:docker}
	I0318 21:40:44.479482   47737 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:40:44.484657   47737 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:40:44.484681   47737 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:40:44.484746   47737 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:40:44.484849   47737 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:40:44.484982   47737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:40:44.494674   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:40:44.522105   47737 start.go:296] duration metric: took 127.647641ms for postStartSetup
	I0318 21:40:44.522147   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetConfigRaw
	I0318 21:40:44.522702   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetIP
	I0318 21:40:44.525288   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.525699   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:44.525726   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.525915   47737 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/config.json ...
	I0318 21:40:44.526106   47737 start.go:128] duration metric: took 25.188282883s to createHost
	I0318 21:40:44.526131   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:44.528611   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.528953   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:44.528986   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.529145   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:44.529361   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:44.529636   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:44.529823   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:44.529996   47737 main.go:141] libmachine: Using SSH client type: native
	I0318 21:40:44.530177   47737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:40:44.530197   47737 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:40:44.638151   47737 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710798044.615449213
	
	I0318 21:40:44.638170   47737 fix.go:216] guest clock: 1710798044.615449213
	I0318 21:40:44.638177   47737 fix.go:229] Guest: 2024-03-18 21:40:44.615449213 +0000 UTC Remote: 2024-03-18 21:40:44.526118007 +0000 UTC m=+51.685251999 (delta=89.331206ms)
	I0318 21:40:44.638214   47737 fix.go:200] guest clock delta is within tolerance: 89.331206ms
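	[editor's note] The clock-skew check above compares the guest's "date +%s.%N" output against the host timestamp and accepts the host if the delta is small. A minimal sketch of that arithmetic using the values from this run; the one-second tolerance is an assumption for illustration, not read from the log.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// guestClockDelta parses `date +%s.%N` output from the guest and returns how
	// far the guest clock is ahead of (or behind) the given host timestamp.
	func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
		sec, err := strconv.ParseFloat(dateOutput, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", dateOutput, err)
		}
		guest := time.Unix(0, int64(sec*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Values lifted from the log lines above.
		host := time.Date(2024, 3, 18, 21, 40, 44, 526118007, time.UTC)
		delta, err := guestClockDelta("1710798044.615449213", host)
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // illustrative threshold, not minikube's actual value
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
	}
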
	I0318 21:40:44.638224   47737 start.go:83] releasing machines lock for "kubernetes-upgrade-397473", held for 25.30054491s
	I0318 21:40:44.638253   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:40:44.638563   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetIP
	I0318 21:40:44.641402   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.641789   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:44.641827   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.641986   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:40:44.642531   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:40:44.642703   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:40:44.642779   47737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:40:44.642819   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:44.642939   47737 ssh_runner.go:195] Run: cat /version.json
	I0318 21:40:44.642964   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:40:44.645508   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.645690   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.645858   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:44.645882   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.645997   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:44.646017   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:44.646051   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:44.646206   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:44.646223   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:40:44.646389   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:40:44.646403   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:44.646551   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:40:44.646594   47737 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa Username:docker}
	I0318 21:40:44.646689   47737 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa Username:docker}
	I0318 21:40:44.732142   47737 ssh_runner.go:195] Run: systemctl --version
	I0318 21:40:44.755435   47737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:40:44.934147   47737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:40:44.941331   47737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:40:44.941408   47737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:40:44.962823   47737 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
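	[editor's note] The find/mv step above sidelines any bridge or podman CNI config so only minikube's own bridge config stays active. A rough Go equivalent of that rename pass; the function name is ours, and it should be pointed at a scratch directory unless run as root.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIConfigs mimics the logged find/mv: any bridge or podman CNI
	// config under dir is renamed with a ".mk_disabled" suffix so CRI-O ignores it.
	func disableBridgeCNIConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, src)
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("disabled:", disabled)
	}
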
	I0318 21:40:44.962849   47737 start.go:494] detecting cgroup driver to use...
	I0318 21:40:44.962901   47737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:40:44.984546   47737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:40:45.000769   47737 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:40:45.000834   47737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:40:45.021384   47737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:40:45.038144   47737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:40:45.188745   47737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:40:45.371822   47737 docker.go:233] disabling docker service ...
	I0318 21:40:45.371880   47737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:40:45.391713   47737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:40:45.410073   47737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:40:45.546407   47737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:40:45.713328   47737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:40:45.738729   47737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:40:45.765826   47737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:40:45.765897   47737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:40:45.782067   47737 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:40:45.782138   47737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:40:45.796400   47737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:40:45.808923   47737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:40:45.820639   47737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:40:45.833466   47737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:40:45.844637   47737 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:40:45.844691   47737 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:40:45.860580   47737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:40:45.872071   47737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:40:46.053562   47737 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:40:46.241995   47737 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:40:46.242074   47737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:40:46.249583   47737 start.go:562] Will wait 60s for crictl version
	I0318 21:40:46.249645   47737 ssh_runner.go:195] Run: which crictl
	I0318 21:40:46.255173   47737 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:40:46.314462   47737 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:40:46.314539   47737 ssh_runner.go:195] Run: crio --version
	I0318 21:40:46.358200   47737 ssh_runner.go:195] Run: crio --version
	I0318 21:40:46.406369   47737 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:40:46.408047   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetIP
	I0318 21:40:46.411400   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:46.411885   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:40:36 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:40:46.411910   47737 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:40:46.412181   47737 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:40:46.417547   47737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
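	[editor's note] Both /etc/hosts edits in this run (host.minikube.internal here, control-plane.minikube.internal later) follow the same grep -v / echo / cp pattern: drop any stale line for the name, then append a fresh "IP<tab>name" entry. A small Go sketch of that upsert, intended for a scratch copy of the file rather than the real /etc/hosts; note it also drops blank lines, which the shell pipeline does not.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry removes any existing line ending in "\t<hostname>" and
	// appends "ip\thostname", mirroring the logged shell pipeline.
	func upsertHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // stale entry, re-added below
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+hostname)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHostsEntry("/tmp/hosts-copy", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
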
	I0318 21:40:46.435361   47737 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-397473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-397473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:40:46.435487   47737 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:40:46.435531   47737 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:40:46.480794   47737 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:40:46.480866   47737 ssh_runner.go:195] Run: which lz4
	I0318 21:40:46.486617   47737 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:40:46.491829   47737 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:40:46.491860   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:40:48.706965   47737 crio.go:462] duration metric: took 2.220382956s to copy over tarball
	I0318 21:40:48.707042   47737 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:40:51.830247   47737 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.123180092s)
	I0318 21:40:51.830279   47737 crio.go:469] duration metric: took 3.123284932s to extract the tarball
	I0318 21:40:51.830287   47737 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:40:51.873714   47737 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:40:51.928558   47737 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:40:51.928588   47737 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:40:51.928661   47737 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:40:51.928695   47737 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:40:51.928723   47737 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:40:51.928717   47737 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:40:51.928740   47737 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:40:51.928687   47737 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:40:51.928666   47737 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:40:51.928661   47737 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:40:51.930101   47737 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:40:51.930213   47737 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:40:51.930265   47737 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:40:51.930103   47737 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:40:51.930389   47737 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:40:51.930389   47737 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:40:51.930478   47737 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:40:51.930495   47737 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:40:52.093512   47737 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:40:52.106988   47737 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:40:52.137293   47737 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:40:52.140252   47737 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:40:52.145420   47737 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:40:52.155142   47737 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:40:52.163904   47737 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:40:52.163946   47737 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:40:52.163998   47737 ssh_runner.go:195] Run: which crictl
	I0318 21:40:52.170904   47737 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:40:52.253039   47737 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:40:52.253092   47737 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:40:52.253145   47737 ssh_runner.go:195] Run: which crictl
	I0318 21:40:52.301234   47737 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:40:52.301266   47737 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:40:52.301305   47737 ssh_runner.go:195] Run: which crictl
	I0318 21:40:52.332919   47737 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:40:52.332970   47737 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:40:52.333001   47737 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:40:52.333024   47737 ssh_runner.go:195] Run: which crictl
	I0318 21:40:52.333034   47737 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:40:52.333072   47737 ssh_runner.go:195] Run: which crictl
	I0318 21:40:52.333102   47737 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:40:52.333131   47737 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:40:52.333144   47737 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:40:52.333156   47737 ssh_runner.go:195] Run: which crictl
	I0318 21:40:52.339170   47737 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:40:52.339209   47737 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:40:52.339271   47737 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:40:52.339298   47737 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:40:52.339327   47737 ssh_runner.go:195] Run: which crictl
	I0318 21:40:52.354092   47737 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:40:52.354190   47737 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:40:52.354121   47737 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:40:52.459559   47737 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:40:52.492732   47737 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:40:52.492819   47737 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:40:52.492853   47737 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:40:52.524706   47737 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:40:52.524721   47737 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:40:52.524788   47737 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:40:52.564187   47737 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:40:52.865051   47737 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:40:53.020440   47737 cache_images.go:92] duration metric: took 1.091831868s to LoadCachedImages
	W0318 21:40:53.020530   47737 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0318 21:40:53.020545   47737 kubeadm.go:928] updating node { 192.168.39.139 8443 v1.20.0 crio true true} ...
	I0318 21:40:53.020672   47737 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-397473 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-397473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:40:53.020758   47737 ssh_runner.go:195] Run: crio config
	I0318 21:40:53.072290   47737 cni.go:84] Creating CNI manager for ""
	I0318 21:40:53.072317   47737 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:40:53.072329   47737 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:40:53.072347   47737 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-397473 NodeName:kubernetes-upgrade-397473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:40:53.072553   47737 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-397473"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:40:53.072627   47737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:40:53.084171   47737 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:40:53.084240   47737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:40:53.095554   47737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0318 21:40:53.118594   47737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:40:53.138782   47737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0318 21:40:53.159621   47737 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0318 21:40:53.164698   47737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:40:53.178849   47737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:40:53.331983   47737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:40:53.351661   47737 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473 for IP: 192.168.39.139
	I0318 21:40:53.351697   47737 certs.go:194] generating shared ca certs ...
	I0318 21:40:53.351726   47737 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:40:53.351940   47737 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:40:53.352001   47737 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:40:53.352018   47737 certs.go:256] generating profile certs ...
	I0318 21:40:53.352091   47737 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/client.key
	I0318 21:40:53.352111   47737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/client.crt with IP's: []
	I0318 21:40:53.704235   47737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/client.crt ...
	I0318 21:40:53.704266   47737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/client.crt: {Name:mk1fb006c562da1dd4db12590b49b6cfe3ff55c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:40:53.704451   47737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/client.key ...
	I0318 21:40:53.704473   47737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/client.key: {Name:mk6c0c307cdcd96418190a38684c7aa20fd47c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:40:53.704576   47737 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.key.218bb9d1
	I0318 21:40:53.704594   47737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.crt.218bb9d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.139]
	I0318 21:40:53.862769   47737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.crt.218bb9d1 ...
	I0318 21:40:53.862801   47737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.crt.218bb9d1: {Name:mkb451ada2e5e458c4ebb8cc57bd7a4b0658b0c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:40:53.863013   47737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.key.218bb9d1 ...
	I0318 21:40:53.863032   47737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.key.218bb9d1: {Name:mkafa2f1e6fd1589ba560478619a52517c59ded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:40:53.863120   47737 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.crt.218bb9d1 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.crt
	I0318 21:40:53.863202   47737 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.key.218bb9d1 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.key
	I0318 21:40:53.863257   47737 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.key
	I0318 21:40:53.863277   47737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.crt with IP's: []
	I0318 21:40:54.012569   47737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.crt ...
	I0318 21:40:54.012602   47737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.crt: {Name:mkab61be575196fd107dd80a387d73873f933b38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:40:54.012774   47737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.key ...
	I0318 21:40:54.012790   47737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.key: {Name:mkf420aa6f7d9e2012063b9d2b9b69c63986daf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:40:54.013003   47737 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:40:54.013043   47737 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:40:54.013054   47737 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:40:54.013077   47737 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:40:54.013100   47737 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:40:54.013120   47737 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:40:54.013155   47737 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:40:54.013712   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:40:54.045011   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:40:54.086586   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:40:54.113995   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:40:54.146685   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 21:40:54.178046   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:40:54.213986   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:40:54.254959   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:40:54.296192   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:40:54.329481   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:40:54.355678   47737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:40:54.380731   47737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:40:54.398629   47737 ssh_runner.go:195] Run: openssl version
	I0318 21:40:54.405291   47737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:40:54.418429   47737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:40:54.423573   47737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:40:54.423625   47737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:40:54.429977   47737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:40:54.444676   47737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:40:54.465975   47737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:40:54.471126   47737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:40:54.471173   47737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:40:54.478794   47737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:40:54.494748   47737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:40:54.509342   47737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:40:54.514651   47737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:40:54.514719   47737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:40:54.521415   47737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:40:54.534319   47737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:40:54.539074   47737 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 21:40:54.539132   47737 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-397473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-397473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:40:54.539223   47737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:40:54.539277   47737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:40:54.580214   47737 cri.go:89] found id: ""
	I0318 21:40:54.580291   47737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 21:40:54.591550   47737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:40:54.602696   47737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:40:54.614179   47737 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:40:54.614201   47737 kubeadm.go:156] found existing configuration files:
	
	I0318 21:40:54.614257   47737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:40:54.624451   47737 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:40:54.624507   47737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:40:54.635265   47737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:40:54.645298   47737 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:40:54.645351   47737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:40:54.655729   47737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:40:54.666700   47737 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:40:54.666752   47737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:40:54.677595   47737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:40:54.690789   47737 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:40:54.690831   47737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
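The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm init can regenerate it. A sketch of the same loop in shell, with the file list and endpoint taken from the log:

    endpoint=https://control-plane.minikube.internal:8443
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already references the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done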
	I0318 21:40:54.704337   47737 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 21:40:55.024022   47737 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 21:42:53.718039   47737 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 21:42:53.718154   47737 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 21:42:53.719575   47737 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 21:42:53.719644   47737 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 21:42:53.719728   47737 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 21:42:53.719847   47737 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 21:42:53.719984   47737 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 21:42:53.720043   47737 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 21:42:53.721852   47737 out.go:204]   - Generating certificates and keys ...
	I0318 21:42:53.721934   47737 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 21:42:53.722006   47737 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 21:42:53.722068   47737 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 21:42:53.722115   47737 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 21:42:53.722166   47737 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 21:42:53.722208   47737 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 21:42:53.722253   47737 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 21:42:53.722359   47737 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-397473 localhost] and IPs [192.168.39.139 127.0.0.1 ::1]
	I0318 21:42:53.722403   47737 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 21:42:53.722518   47737 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-397473 localhost] and IPs [192.168.39.139 127.0.0.1 ::1]
	I0318 21:42:53.722574   47737 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 21:42:53.722627   47737 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 21:42:53.722665   47737 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 21:42:53.722711   47737 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 21:42:53.722754   47737 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 21:42:53.722801   47737 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 21:42:53.722861   47737 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 21:42:53.722909   47737 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 21:42:53.722993   47737 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 21:42:53.723063   47737 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 21:42:53.723115   47737 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 21:42:53.723189   47737 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 21:42:53.724577   47737 out.go:204]   - Booting up control plane ...
	I0318 21:42:53.724672   47737 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 21:42:53.724768   47737 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 21:42:53.724872   47737 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 21:42:53.725008   47737 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 21:42:53.725194   47737 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 21:42:53.725256   47737 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 21:42:53.725347   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:42:53.725570   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:42:53.725636   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:42:53.725887   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:42:53.725961   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:42:53.726153   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:42:53.726233   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:42:53.726418   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:42:53.726513   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:42:53.726672   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:42:53.726682   47737 kubeadm.go:309] 
	I0318 21:42:53.726739   47737 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 21:42:53.726817   47737 kubeadm.go:309] 		timed out waiting for the condition
	I0318 21:42:53.726827   47737 kubeadm.go:309] 
	I0318 21:42:53.726898   47737 kubeadm.go:309] 	This error is likely caused by:
	I0318 21:42:53.726947   47737 kubeadm.go:309] 		- The kubelet is not running
	I0318 21:42:53.727064   47737 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 21:42:53.727074   47737 kubeadm.go:309] 
	I0318 21:42:53.727199   47737 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 21:42:53.727254   47737 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 21:42:53.727307   47737 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 21:42:53.727316   47737 kubeadm.go:309] 
	I0318 21:42:53.727469   47737 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 21:42:53.727598   47737 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 21:42:53.727613   47737 kubeadm.go:309] 
	I0318 21:42:53.727763   47737 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 21:42:53.727843   47737 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 21:42:53.727912   47737 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 21:42:53.727991   47737 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 21:42:53.728018   47737 kubeadm.go:309] 
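The failure mode above is consistent for the whole run: kubeadm's wait-control-plane phase polls the kubelet's healthz endpoint and never gets an answer, and the only preflight warning is that the kubelet service is not enabled. A sketch of the two checks that follow directly from those messages, run on the node (the port is the one kubeadm polls above):

    # the preflight warning suggests the unit was never enabled/started
    sudo systemctl enable --now kubelet
    # kubeadm's wait-control-plane loop polls this endpoint; it should answer once the kubelet is up
    curl -sSL http://localhost:10248/healthz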
	W0318 21:42:53.728113   47737 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-397473 localhost] and IPs [192.168.39.139 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-397473 localhost] and IPs [192.168.39.139 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 21:42:53.728161   47737 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 21:42:56.333751   47737 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.605525112s)
	I0318 21:42:56.333855   47737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
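Before retrying, the failed first attempt is torn down with kubeadm reset against the CRI-O socket and the kubelet state is re-checked. A sketch of the equivalent manual teardown, with the socket path copied from the log:

    # tear down everything the failed init left behind, then confirm the kubelet state
    sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo systemctl is-active kubelet || echo "kubelet is not active"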
	I0318 21:42:56.350399   47737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:42:56.361582   47737 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:42:56.361597   47737 kubeadm.go:156] found existing configuration files:
	
	I0318 21:42:56.361632   47737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:42:56.372965   47737 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:42:56.373019   47737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:42:56.384118   47737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:42:56.394790   47737 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:42:56.394850   47737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:42:56.405880   47737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:42:56.416394   47737 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:42:56.416445   47737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:42:56.427288   47737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:42:56.437692   47737 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:42:56.437730   47737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:42:56.448306   47737 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 21:42:56.675875   47737 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 21:44:52.965864   47737 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 21:44:52.965994   47737 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 21:44:52.967616   47737 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 21:44:52.967680   47737 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 21:44:52.967780   47737 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 21:44:52.967904   47737 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 21:44:52.968047   47737 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 21:44:52.968144   47737 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 21:44:52.970609   47737 out.go:204]   - Generating certificates and keys ...
	I0318 21:44:52.970688   47737 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 21:44:52.970757   47737 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 21:44:52.970867   47737 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 21:44:52.970940   47737 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 21:44:52.971019   47737 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 21:44:52.971084   47737 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 21:44:52.971145   47737 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 21:44:52.971194   47737 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 21:44:52.971298   47737 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 21:44:52.971408   47737 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 21:44:52.971468   47737 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 21:44:52.971553   47737 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 21:44:52.971598   47737 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 21:44:52.971640   47737 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 21:44:52.971690   47737 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 21:44:52.971748   47737 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 21:44:52.971834   47737 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 21:44:52.971918   47737 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 21:44:52.971978   47737 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 21:44:52.972085   47737 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 21:44:52.973240   47737 out.go:204]   - Booting up control plane ...
	I0318 21:44:52.973341   47737 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 21:44:52.973431   47737 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 21:44:52.973519   47737 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 21:44:52.973634   47737 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 21:44:52.973768   47737 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 21:44:52.973811   47737 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 21:44:52.973868   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:44:52.974021   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:44:52.974078   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:44:52.974264   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:44:52.974360   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:44:52.974619   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:44:52.974703   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:44:52.974896   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:44:52.974977   47737 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:44:52.975187   47737 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:44:52.975200   47737 kubeadm.go:309] 
	I0318 21:44:52.975256   47737 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 21:44:52.975315   47737 kubeadm.go:309] 		timed out waiting for the condition
	I0318 21:44:52.975324   47737 kubeadm.go:309] 
	I0318 21:44:52.975378   47737 kubeadm.go:309] 	This error is likely caused by:
	I0318 21:44:52.975429   47737 kubeadm.go:309] 		- The kubelet is not running
	I0318 21:44:52.975532   47737 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 21:44:52.975547   47737 kubeadm.go:309] 
	I0318 21:44:52.975662   47737 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 21:44:52.975724   47737 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 21:44:52.975779   47737 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 21:44:52.975790   47737 kubeadm.go:309] 
	I0318 21:44:52.975916   47737 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 21:44:52.976038   47737 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 21:44:52.976056   47737 kubeadm.go:309] 
	I0318 21:44:52.976164   47737 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 21:44:52.976239   47737 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 21:44:52.976303   47737 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 21:44:52.976378   47737 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 21:44:52.976432   47737 kubeadm.go:309] 
	I0318 21:44:52.976471   47737 kubeadm.go:393] duration metric: took 3m58.437338011s to StartCluster
	I0318 21:44:52.976507   47737 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:44:52.976554   47737 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:44:53.022405   47737 cri.go:89] found id: ""
	I0318 21:44:53.022427   47737 logs.go:276] 0 containers: []
	W0318 21:44:53.022435   47737 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:44:53.022440   47737 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:44:53.022490   47737 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:44:53.068753   47737 cri.go:89] found id: ""
	I0318 21:44:53.068778   47737 logs.go:276] 0 containers: []
	W0318 21:44:53.068788   47737 logs.go:278] No container was found matching "etcd"
	I0318 21:44:53.068795   47737 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:44:53.068849   47737 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:44:53.127946   47737 cri.go:89] found id: ""
	I0318 21:44:53.127970   47737 logs.go:276] 0 containers: []
	W0318 21:44:53.127979   47737 logs.go:278] No container was found matching "coredns"
	I0318 21:44:53.127987   47737 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:44:53.128033   47737 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:44:53.170956   47737 cri.go:89] found id: ""
	I0318 21:44:53.170978   47737 logs.go:276] 0 containers: []
	W0318 21:44:53.170986   47737 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:44:53.170993   47737 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:44:53.171045   47737 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:44:53.219947   47737 cri.go:89] found id: ""
	I0318 21:44:53.219982   47737 logs.go:276] 0 containers: []
	W0318 21:44:53.219993   47737 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:44:53.220001   47737 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:44:53.220059   47737 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:44:53.272983   47737 cri.go:89] found id: ""
	I0318 21:44:53.273015   47737 logs.go:276] 0 containers: []
	W0318 21:44:53.273027   47737 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:44:53.273035   47737 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:44:53.273092   47737 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:44:53.316156   47737 cri.go:89] found id: ""
	I0318 21:44:53.316187   47737 logs.go:276] 0 containers: []
	W0318 21:44:53.316198   47737 logs.go:278] No container was found matching "kindnet"
	I0318 21:44:53.316209   47737 logs.go:123] Gathering logs for kubelet ...
	I0318 21:44:53.316224   47737 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:44:53.381680   47737 logs.go:123] Gathering logs for dmesg ...
	I0318 21:44:53.381706   47737 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:44:53.397803   47737 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:44:53.397829   47737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 21:44:53.555782   47737 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 21:44:53.555804   47737 logs.go:123] Gathering logs for CRI-O ...
	I0318 21:44:53.555815   47737 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 21:44:53.668303   47737 logs.go:123] Gathering logs for container status ...
	I0318 21:44:53.668342   47737 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
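After the second attempt also times out, minikube gathers the diagnostics shown above before giving up. A sketch of collecting the same data by hand on the node (commands copied from the log; the output file names are illustrative):

    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo journalctl -u crio -n 400 > crio.log
    sudo crictl ps -a > containers.log
    # fails with 'connection refused' while the apiserver is down, exactly as it does above
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig > nodes.log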
	W0318 21:44:53.716646   47737 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 21:44:53.716691   47737 out.go:239] * 
	W0318 21:44:53.716749   47737 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 21:44:53.716780   47737 out.go:239] * 
	* 
	W0318 21:44:53.717784   47737 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 21:44:53.721281   47737 out.go:177] 
	W0318 21:44:53.722554   47737 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 21:44:53.722609   47737 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 21:44:53.722634   47737 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 21:44:53.724123   47737 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
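The failing start above ends with minikube's own suggestion: check the output of 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of that retry against the same profile, built only from flags that appear in this run (the test itself does not perform this step):

	# inspect the kubelet journal inside the VM, as the suggestion asks
	out/minikube-linux-amd64 -p kubernetes-upgrade-397473 ssh "sudo journalctl -xeu kubelet --no-pager"
	# retry the v1.20.0 start with the suggested kubelet cgroup-driver override
	out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd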
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-397473
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-397473: (1.669115309s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-397473 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-397473 status --format={{.Host}}: exit status 7 (79.234876ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.811951142s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-397473 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.913914ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-397473] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-397473
	    minikube start -p kubernetes-upgrade-397473 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3974732 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-397473 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
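The downgrade from v1.29.0-rc.2 to v1.20.0 is rejected by design, and the output above lists three ways forward. A sketch of option 1, recreating the cluster at the older version with the same binary and flags this test uses elsewhere (the test instead proceeds with a plain restart at v1.29.0-rc.2):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-397473
	out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio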
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-397473 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.345188709s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-18 21:46:50.846918 +0000 UTC m=+4667.748706456
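The suggestion box earlier asks for full logs when filing an issue, while the post-mortem below only captures the last 25 lines. Both invocations, assuming the same profile (the --file form is taken from the suggestion box, not from this run):

	out/minikube-linux-amd64 -p kubernetes-upgrade-397473 logs --file=logs.txt
	out/minikube-linux-amd64 -p kubernetes-upgrade-397473 logs -n 25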
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-397473 -n kubernetes-upgrade-397473
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-397473 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-397473 logs -n 25: (2.076176786s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo cat                           | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo cat                           | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo cat                           | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo docker                        | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo cat                           | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo cat                           | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo cat                           | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo cat                           | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo                               | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo find                          | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-389288 sudo crio                          | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-389288                                    | kindnet-389288            | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC | 18 Mar 24 21:46 UTC |
	| start   | -p enable-default-cni-389288                         | enable-default-cni-389288 | jenkins | v1.32.0 | 18 Mar 24 21:46 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:46:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:46:12.491869   54622 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:46:12.492010   54622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:46:12.492020   54622 out.go:304] Setting ErrFile to fd 2...
	I0318 21:46:12.492024   54622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:46:12.492230   54622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:46:12.492888   54622 out.go:298] Setting JSON to false
	I0318 21:46:12.494074   54622 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5316,"bootTime":1710793056,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:46:12.494139   54622 start.go:139] virtualization: kvm guest
	I0318 21:46:12.496351   54622 out.go:177] * [enable-default-cni-389288] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:46:12.497987   54622 notify.go:220] Checking for updates...
	I0318 21:46:12.497992   54622 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:46:12.499280   54622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:46:12.500556   54622 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:46:12.501994   54622 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:46:12.503255   54622 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:46:12.505069   54622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:46:12.506739   54622 config.go:182] Loaded profile config "calico-389288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:46:12.506859   54622 config.go:182] Loaded profile config "custom-flannel-389288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:46:12.506960   54622 config.go:182] Loaded profile config "kubernetes-upgrade-397473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:46:12.507072   54622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:46:12.543642   54622 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 21:46:12.544821   54622 start.go:297] selected driver: kvm2
	I0318 21:46:12.544838   54622 start.go:901] validating driver "kvm2" against <nil>
	I0318 21:46:12.544872   54622 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:46:12.545569   54622 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:46:12.545645   54622 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:46:12.559931   54622 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:46:12.559966   54622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0318 21:46:12.560192   54622 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0318 21:46:12.560225   54622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:46:12.560301   54622 cni.go:84] Creating CNI manager for "bridge"
	I0318 21:46:12.560318   54622 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 21:46:12.560390   54622 start.go:340] cluster config:
	{Name:enable-default-cni-389288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-389288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:46:12.560493   54622 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:46:12.562037   54622 out.go:177] * Starting "enable-default-cni-389288" primary control-plane node in "enable-default-cni-389288" cluster
	I0318 21:46:09.402069   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:09.402619   52561 main.go:141] libmachine: (custom-flannel-389288) Found IP for machine: 192.168.61.187
	I0318 21:46:09.402651   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has current primary IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:09.402661   52561 main.go:141] libmachine: (custom-flannel-389288) Reserving static IP address...
	I0318 21:46:09.402978   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | unable to find host DHCP lease matching {name: "custom-flannel-389288", mac: "52:54:00:9f:91:7d", ip: "192.168.61.187"} in network mk-custom-flannel-389288
	I0318 21:46:09.483558   52561 main.go:141] libmachine: (custom-flannel-389288) Reserved static IP address: 192.168.61.187
	I0318 21:46:09.483587   52561 main.go:141] libmachine: (custom-flannel-389288) Waiting for SSH to be available...
	I0318 21:46:09.483606   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | Getting to WaitForSSH function...
	I0318 21:46:09.486869   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:09.487281   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288
	I0318 21:46:09.487302   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | unable to find defined IP address of network mk-custom-flannel-389288 interface with MAC address 52:54:00:9f:91:7d
	I0318 21:46:09.487425   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | Using SSH client type: external
	I0318 21:46:09.487441   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/custom-flannel-389288/id_rsa (-rw-------)
	I0318 21:46:09.487467   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/custom-flannel-389288/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:46:09.487481   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | About to run SSH command:
	I0318 21:46:09.487491   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | exit 0
	I0318 21:46:09.491380   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | SSH cmd err, output: exit status 255: 
	I0318 21:46:09.491406   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0318 21:46:09.491417   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | command : exit 0
	I0318 21:46:09.491429   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | err     : exit status 255
	I0318 21:46:09.491462   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | output  : 
	I0318 21:46:12.493051   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | Getting to WaitForSSH function...
	I0318 21:46:12.495398   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.495849   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:12.495893   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.496034   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | Using SSH client type: external
	I0318 21:46:12.496066   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/custom-flannel-389288/id_rsa (-rw-------)
	I0318 21:46:12.496120   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/custom-flannel-389288/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:46:12.496142   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | About to run SSH command:
	I0318 21:46:12.496157   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | exit 0
	I0318 21:46:12.629154   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | SSH cmd err, output: <nil>: 
	I0318 21:46:12.629422   52561 main.go:141] libmachine: (custom-flannel-389288) KVM machine creation complete!
	I0318 21:46:12.629691   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetConfigRaw
	I0318 21:46:12.630180   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .DriverName
	I0318 21:46:12.630391   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .DriverName
	I0318 21:46:12.630547   52561 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 21:46:12.630562   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetState
	I0318 21:46:12.631720   52561 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 21:46:12.631739   52561 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 21:46:12.631747   52561 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 21:46:12.631756   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:12.634157   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.634494   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:12.634520   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.634673   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:12.634854   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:12.634996   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:12.635136   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:12.635298   52561 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:12.635528   52561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0318 21:46:12.635543   52561 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 21:46:12.736365   52561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:46:12.736386   52561 main.go:141] libmachine: Detecting the provisioner...
	I0318 21:46:12.736408   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:12.739511   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.739960   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:12.739990   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.740199   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:12.740386   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:12.740558   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:08.228986   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:08.728550   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:09.228401   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:09.729013   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:10.228310   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:10.728465   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:11.229281   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:11.728266   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:12.229020   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:12.728858   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:13.930251   52904 start.go:364] duration metric: took 31.271077346s to acquireMachinesLock for "kubernetes-upgrade-397473"
	I0318 21:46:13.930313   52904 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:46:13.930324   52904 fix.go:54] fixHost starting: 
	I0318 21:46:13.930712   52904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:46:13.930764   52904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:46:13.950769   52904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
	I0318 21:46:13.951266   52904 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:46:13.951802   52904 main.go:141] libmachine: Using API Version  1
	I0318 21:46:13.951829   52904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:46:13.952204   52904 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:46:13.952375   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:46:13.952552   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetState
	I0318 21:46:13.954079   52904 fix.go:112] recreateIfNeeded on kubernetes-upgrade-397473: state=Running err=<nil>
	W0318 21:46:13.954104   52904 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:46:13.956340   52904 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-397473" VM ...
	I0318 21:46:12.740734   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:12.743022   52561 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:12.743236   52561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0318 21:46:12.743253   52561 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 21:46:12.850419   52561 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 21:46:12.850499   52561 main.go:141] libmachine: found compatible host: buildroot
	I0318 21:46:12.850507   52561 main.go:141] libmachine: Provisioning with buildroot...
	I0318 21:46:12.850516   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetMachineName
	I0318 21:46:12.850800   52561 buildroot.go:166] provisioning hostname "custom-flannel-389288"
	I0318 21:46:12.850825   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetMachineName
	I0318 21:46:12.851043   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:12.853901   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.854301   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:12.854337   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.854476   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:12.854650   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:12.854825   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:12.854977   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:12.855145   52561 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:12.855348   52561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0318 21:46:12.855367   52561 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-389288 && echo "custom-flannel-389288" | sudo tee /etc/hostname
	I0318 21:46:12.974181   52561 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-389288
	
	I0318 21:46:12.974206   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:12.976929   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.977228   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:12.977256   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:12.977440   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:12.977637   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:12.977841   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:12.977965   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:12.978094   52561 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:12.978268   52561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0318 21:46:12.978294   52561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-389288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-389288/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-389288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:46:13.090554   52561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:46:13.090582   52561 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:46:13.090618   52561 buildroot.go:174] setting up certificates
	I0318 21:46:13.090634   52561 provision.go:84] configureAuth start
	I0318 21:46:13.090649   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetMachineName
	I0318 21:46:13.090904   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetIP
	I0318 21:46:13.093659   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.093993   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.094025   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.094171   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:13.096096   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.096406   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.096435   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.096496   52561 provision.go:143] copyHostCerts
	I0318 21:46:13.096553   52561 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:46:13.096566   52561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:46:13.096618   52561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:46:13.096720   52561 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:46:13.096731   52561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:46:13.096759   52561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:46:13.096833   52561 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:46:13.096844   52561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:46:13.096873   52561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:46:13.096976   52561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-389288 san=[127.0.0.1 192.168.61.187 custom-flannel-389288 localhost minikube]
	I0318 21:46:13.208771   52561 provision.go:177] copyRemoteCerts
	I0318 21:46:13.208822   52561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:46:13.208845   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:13.211378   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.211856   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.211890   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.212011   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:13.212200   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:13.212372   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:13.212523   52561 sshutil.go:53] new ssh client: &{IP:192.168.61.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/custom-flannel-389288/id_rsa Username:docker}
	I0318 21:46:13.297807   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:46:13.326332   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:46:13.353686   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:46:13.381510   52561 provision.go:87] duration metric: took 290.859318ms to configureAuth
	I0318 21:46:13.381539   52561 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:46:13.381733   52561 config.go:182] Loaded profile config "custom-flannel-389288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:46:13.381818   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:13.384696   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.385038   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.385064   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.385235   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:13.385421   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:13.385565   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:13.385698   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:13.385864   52561 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:13.386042   52561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0318 21:46:13.386063   52561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:46:13.671591   52561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:46:13.671629   52561 main.go:141] libmachine: Checking connection to Docker...
	I0318 21:46:13.671644   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetURL
	I0318 21:46:13.673080   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | Using libvirt version 6000000
	I0318 21:46:13.675297   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.675626   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.675648   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.675815   52561 main.go:141] libmachine: Docker is up and running!
	I0318 21:46:13.675831   52561 main.go:141] libmachine: Reticulating splines...
	I0318 21:46:13.675840   52561 client.go:171] duration metric: took 29.018539262s to LocalClient.Create
	I0318 21:46:13.675867   52561 start.go:167] duration metric: took 29.01860885s to libmachine.API.Create "custom-flannel-389288"
	I0318 21:46:13.675880   52561 start.go:293] postStartSetup for "custom-flannel-389288" (driver="kvm2")
	I0318 21:46:13.675895   52561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:46:13.675920   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .DriverName
	I0318 21:46:13.676164   52561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:46:13.676187   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:13.678628   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.679043   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.679071   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.679264   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:13.679443   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:13.679633   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:13.679850   52561 sshutil.go:53] new ssh client: &{IP:192.168.61.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/custom-flannel-389288/id_rsa Username:docker}
	I0318 21:46:13.763846   52561 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:46:13.768653   52561 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:46:13.768675   52561 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:46:13.768741   52561 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:46:13.768846   52561 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:46:13.768979   52561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:46:13.778618   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:46:13.807745   52561 start.go:296] duration metric: took 131.849675ms for postStartSetup
	I0318 21:46:13.807789   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetConfigRaw
	I0318 21:46:13.808328   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetIP
	I0318 21:46:13.810722   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.811059   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.811087   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.811324   52561 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/config.json ...
	I0318 21:46:13.811490   52561 start.go:128] duration metric: took 29.176362888s to createHost
	I0318 21:46:13.811513   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:13.813906   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.814244   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.814284   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.814384   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:13.814564   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:13.814742   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:13.814900   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:13.815092   52561 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:13.815293   52561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0318 21:46:13.815305   52561 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:46:13.930099   52561 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710798373.867881000
	
	I0318 21:46:13.930121   52561 fix.go:216] guest clock: 1710798373.867881000
	I0318 21:46:13.930127   52561 fix.go:229] Guest: 2024-03-18 21:46:13.867881 +0000 UTC Remote: 2024-03-18 21:46:13.811500858 +0000 UTC m=+66.132687388 (delta=56.380142ms)
	I0318 21:46:13.930159   52561 fix.go:200] guest clock delta is within tolerance: 56.380142ms
	I0318 21:46:13.930163   52561 start.go:83] releasing machines lock for "custom-flannel-389288", held for 29.295221369s
	I0318 21:46:13.930189   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .DriverName
	I0318 21:46:13.930491   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetIP
	I0318 21:46:13.933733   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.934230   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.934268   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.934444   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .DriverName
	I0318 21:46:13.935136   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .DriverName
	I0318 21:46:13.935328   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .DriverName
	I0318 21:46:13.935420   52561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:46:13.935501   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:13.935588   52561 ssh_runner.go:195] Run: cat /version.json
	I0318 21:46:13.935614   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHHostname
	I0318 21:46:13.938396   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.938417   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.938827   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.938861   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.938949   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:13.938981   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:13.939039   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:13.939194   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHPort
	I0318 21:46:13.939280   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:13.939358   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHKeyPath
	I0318 21:46:13.939457   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:13.939527   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetSSHUsername
	I0318 21:46:13.939592   52561 sshutil.go:53] new ssh client: &{IP:192.168.61.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/custom-flannel-389288/id_rsa Username:docker}
	I0318 21:46:13.939681   52561 sshutil.go:53] new ssh client: &{IP:192.168.61.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/custom-flannel-389288/id_rsa Username:docker}
	I0318 21:46:14.022676   52561 ssh_runner.go:195] Run: systemctl --version
	I0318 21:46:14.047722   52561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:46:14.213965   52561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:46:14.221722   52561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:46:14.221798   52561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:46:14.241489   52561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:46:14.241517   52561 start.go:494] detecting cgroup driver to use...
	I0318 21:46:14.241578   52561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:46:14.263429   52561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:46:14.286115   52561 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:46:14.286177   52561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:46:14.303583   52561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:46:14.321728   52561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:46:14.446525   52561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:46:14.598916   52561 docker.go:233] disabling docker service ...
	I0318 21:46:14.598977   52561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:46:14.616199   52561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:46:14.631747   52561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:46:14.789403   52561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:46:14.918446   52561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:46:14.934885   52561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:46:14.955334   52561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:46:14.955417   52561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:14.966059   52561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:46:14.966109   52561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:14.977409   52561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:14.987925   52561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:14.998816   52561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:46:15.009646   52561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:15.020380   52561 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:15.038654   52561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:15.048989   52561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:46:15.058480   52561 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:46:15.058533   52561 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:46:15.071907   52561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:46:15.081614   52561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:46:15.197411   52561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:46:15.353302   52561 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:46:15.353366   52561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:46:15.358602   52561 start.go:562] Will wait 60s for crictl version
	I0318 21:46:15.358657   52561 ssh_runner.go:195] Run: which crictl
	I0318 21:46:15.362831   52561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:46:15.405936   52561 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:46:15.406030   52561 ssh_runner.go:195] Run: crio --version
	I0318 21:46:15.437362   52561 ssh_runner.go:195] Run: crio --version
	I0318 21:46:15.471317   52561 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:46:12.563117   54622 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:46:12.563146   54622 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 21:46:12.563154   54622 cache.go:56] Caching tarball of preloaded images
	I0318 21:46:12.563207   54622 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 21:46:12.563224   54622 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 21:46:12.563300   54622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/config.json ...
	I0318 21:46:12.563316   54622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/config.json: {Name:mka96a00415e6c63f66d59851503672fd4245979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:12.563443   54622 start.go:360] acquireMachinesLock for enable-default-cni-389288: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:46:13.957855   52904 machine.go:94] provisionDockerMachine start ...
	I0318 21:46:13.957887   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:46:13.958158   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:13.960958   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:13.961456   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:13.961496   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:13.961658   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:13.961846   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:13.962007   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:13.962160   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:13.962357   52904 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:13.962582   52904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:46:13.962595   52904 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:46:14.088176   52904 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-397473
	
	I0318 21:46:14.088208   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetMachineName
	I0318 21:46:14.088470   52904 buildroot.go:166] provisioning hostname "kubernetes-upgrade-397473"
	I0318 21:46:14.088503   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetMachineName
	I0318 21:46:14.088714   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:14.091126   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.091396   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:14.091451   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.091657   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:14.091847   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:14.091989   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:14.092109   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:14.092287   52904 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:14.092505   52904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:46:14.092523   52904 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-397473 && echo "kubernetes-upgrade-397473" | sudo tee /etc/hostname
	I0318 21:46:14.231496   52904 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-397473
	
	I0318 21:46:14.231533   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:14.234833   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.235255   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:14.235288   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.235506   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:14.235720   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:14.235898   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:14.236054   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:14.236248   52904 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:14.236456   52904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:46:14.236490   52904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-397473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-397473/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-397473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:46:14.359537   52904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:46:14.359570   52904 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:46:14.359610   52904 buildroot.go:174] setting up certificates
	I0318 21:46:14.359621   52904 provision.go:84] configureAuth start
	I0318 21:46:14.359639   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetMachineName
	I0318 21:46:14.359937   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetIP
	I0318 21:46:14.362935   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.363310   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:14.363351   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.363472   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:14.365826   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.366263   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:14.366306   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.366493   52904 provision.go:143] copyHostCerts
	I0318 21:46:14.366577   52904 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:46:14.366589   52904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:46:14.366656   52904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:46:14.366760   52904 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:46:14.366771   52904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:46:14.366800   52904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:46:14.366877   52904 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:46:14.366886   52904 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:46:14.366915   52904 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:46:14.366977   52904 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-397473 san=[127.0.0.1 192.168.39.139 kubernetes-upgrade-397473 localhost minikube]
	I0318 21:46:14.462567   52904 provision.go:177] copyRemoteCerts
	I0318 21:46:14.462615   52904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:46:14.462635   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:14.465468   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.465912   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:14.465944   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.466150   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:14.466357   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:14.466509   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:14.466636   52904 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa Username:docker}
	I0318 21:46:14.557465   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:46:14.595088   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0318 21:46:14.644648   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:46:14.677484   52904 provision.go:87] duration metric: took 317.845808ms to configureAuth
	I0318 21:46:14.677517   52904 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:46:14.677737   52904 config.go:182] Loaded profile config "kubernetes-upgrade-397473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:46:14.677828   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:14.680742   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.681141   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:14.681168   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:14.681391   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:14.681600   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:14.681801   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:14.681965   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:14.682148   52904 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:14.682331   52904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:46:14.682359   52904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:46:15.472580   52561 main.go:141] libmachine: (custom-flannel-389288) Calling .GetIP
	I0318 21:46:15.475017   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:15.475326   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:91:7d", ip: ""} in network mk-custom-flannel-389288: {Iface:virbr2 ExpiryTime:2024-03-18 22:46:02 +0000 UTC Type:0 Mac:52:54:00:9f:91:7d Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:custom-flannel-389288 Clientid:01:52:54:00:9f:91:7d}
	I0318 21:46:15.475348   52561 main.go:141] libmachine: (custom-flannel-389288) DBG | domain custom-flannel-389288 has defined IP address 192.168.61.187 and MAC address 52:54:00:9f:91:7d in network mk-custom-flannel-389288
	I0318 21:46:15.475565   52561 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:46:15.479988   52561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:46:15.493566   52561 kubeadm.go:877] updating cluster {Name:custom-flannel-389288 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-389288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.187 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:46:15.493662   52561 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:46:15.493703   52561 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:46:15.527787   52561 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:46:15.527847   52561 ssh_runner.go:195] Run: which lz4
	I0318 21:46:15.532292   52561 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:46:15.536717   52561 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:46:15.536747   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:46:17.539684   52561 crio.go:462] duration metric: took 2.007421222s to copy over tarball
	I0318 21:46:17.539780   52561 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:46:13.228965   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:13.728390   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:14.228445   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:14.728335   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:15.228949   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:15.728330   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:16.229241   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:16.729013   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:17.229278   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:17.728976   52399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 21:46:17.839333   52399 kubeadm.go:1107] duration metric: took 10.309798486s to wait for elevateKubeSystemPrivileges
	W0318 21:46:17.839370   52399 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 21:46:17.839378   52399 kubeadm.go:393] duration metric: took 24.117645749s to StartCluster
	I0318 21:46:17.839392   52399 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:17.839470   52399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:46:17.840858   52399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:17.841128   52399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 21:46:17.841136   52399 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.50.206 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:46:17.842843   52399 out.go:177] * Verifying Kubernetes components...
	I0318 21:46:17.841238   52399 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 21:46:17.842882   52399 addons.go:69] Setting storage-provisioner=true in profile "calico-389288"
	I0318 21:46:17.842917   52399 addons.go:234] Setting addon storage-provisioner=true in "calico-389288"
	I0318 21:46:17.844366   52399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:46:17.841377   52399 config.go:182] Loaded profile config "calico-389288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:46:17.842933   52399 addons.go:69] Setting default-storageclass=true in profile "calico-389288"
	I0318 21:46:17.844467   52399 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-389288"
	I0318 21:46:17.842976   52399 host.go:66] Checking if "calico-389288" exists ...
	I0318 21:46:17.844892   52399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:46:17.844942   52399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:46:17.844963   52399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:46:17.844991   52399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:46:17.865638   52399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0318 21:46:17.865649   52399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43265
	I0318 21:46:17.866090   52399 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:46:17.866282   52399 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:46:17.866725   52399 main.go:141] libmachine: Using API Version  1
	I0318 21:46:17.866750   52399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:46:17.867083   52399 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:46:17.867213   52399 main.go:141] libmachine: Using API Version  1
	I0318 21:46:17.867235   52399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:46:17.867645   52399 main.go:141] libmachine: (calico-389288) Calling .GetState
	I0318 21:46:17.868454   52399 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:46:17.869037   52399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:46:17.869066   52399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:46:17.870948   52399 addons.go:234] Setting addon default-storageclass=true in "calico-389288"
	I0318 21:46:17.870996   52399 host.go:66] Checking if "calico-389288" exists ...
	I0318 21:46:17.871342   52399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:46:17.871385   52399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:46:17.886227   52399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I0318 21:46:17.886691   52399 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:46:17.887345   52399 main.go:141] libmachine: Using API Version  1
	I0318 21:46:17.887369   52399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:46:17.887754   52399 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:46:17.888321   52399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:46:17.888364   52399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:46:17.889250   52399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0318 21:46:17.889693   52399 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:46:17.890356   52399 main.go:141] libmachine: Using API Version  1
	I0318 21:46:17.890378   52399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:46:17.890764   52399 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:46:17.890947   52399 main.go:141] libmachine: (calico-389288) Calling .GetState
	I0318 21:46:17.892713   52399 main.go:141] libmachine: (calico-389288) Calling .DriverName
	I0318 21:46:17.894719   52399 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:46:17.896089   52399 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:46:17.896108   52399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 21:46:17.896126   52399 main.go:141] libmachine: (calico-389288) Calling .GetSSHHostname
	I0318 21:46:17.904668   52399 main.go:141] libmachine: (calico-389288) DBG | domain calico-389288 has defined MAC address 52:54:00:7c:77:f5 in network mk-calico-389288
	I0318 21:46:17.905083   52399 main.go:141] libmachine: (calico-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-calico-389288: {Iface:virbr1 ExpiryTime:2024-03-18 22:45:37 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:calico-389288 Clientid:01:52:54:00:7c:77:f5}
	I0318 21:46:17.905108   52399 main.go:141] libmachine: (calico-389288) DBG | domain calico-389288 has defined IP address 192.168.50.206 and MAC address 52:54:00:7c:77:f5 in network mk-calico-389288
	I0318 21:46:17.905415   52399 main.go:141] libmachine: (calico-389288) Calling .GetSSHPort
	I0318 21:46:17.905627   52399 main.go:141] libmachine: (calico-389288) Calling .GetSSHKeyPath
	I0318 21:46:17.905795   52399 main.go:141] libmachine: (calico-389288) Calling .GetSSHUsername
	I0318 21:46:17.905950   52399 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/calico-389288/id_rsa Username:docker}
	I0318 21:46:17.906330   52399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35097
	I0318 21:46:17.906798   52399 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:46:17.907318   52399 main.go:141] libmachine: Using API Version  1
	I0318 21:46:17.907342   52399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:46:17.907728   52399 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:46:17.908016   52399 main.go:141] libmachine: (calico-389288) Calling .GetState
	I0318 21:46:17.909636   52399 main.go:141] libmachine: (calico-389288) Calling .DriverName
	I0318 21:46:17.909905   52399 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 21:46:17.909924   52399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 21:46:17.909945   52399 main.go:141] libmachine: (calico-389288) Calling .GetSSHHostname
	I0318 21:46:17.913250   52399 main.go:141] libmachine: (calico-389288) DBG | domain calico-389288 has defined MAC address 52:54:00:7c:77:f5 in network mk-calico-389288
	I0318 21:46:17.913730   52399 main.go:141] libmachine: (calico-389288) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-calico-389288: {Iface:virbr1 ExpiryTime:2024-03-18 22:45:37 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.50.206 Prefix:24 Hostname:calico-389288 Clientid:01:52:54:00:7c:77:f5}
	I0318 21:46:17.913757   52399 main.go:141] libmachine: (calico-389288) DBG | domain calico-389288 has defined IP address 192.168.50.206 and MAC address 52:54:00:7c:77:f5 in network mk-calico-389288
	I0318 21:46:17.913917   52399 main.go:141] libmachine: (calico-389288) Calling .GetSSHPort
	I0318 21:46:17.914094   52399 main.go:141] libmachine: (calico-389288) Calling .GetSSHKeyPath
	I0318 21:46:17.914277   52399 main.go:141] libmachine: (calico-389288) Calling .GetSSHUsername
	I0318 21:46:17.914438   52399 sshutil.go:53] new ssh client: &{IP:192.168.50.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/calico-389288/id_rsa Username:docker}
	I0318 21:46:18.033171   52399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 21:46:18.080449   52399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:46:18.234973   52399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:46:18.251148   52399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 21:46:19.190387   52399 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.10989329s)
	I0318 21:46:19.191411   52399 node_ready.go:35] waiting up to 15m0s for node "calico-389288" to be "Ready" ...
	I0318 21:46:19.191767   52399 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.15856151s)
	I0318 21:46:19.191790   52399 start.go:948] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
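The sed pipeline logged above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host (192.168.50.1 here) and adds a log directive. A minimal sketch of how the injected record could be confirmed afterwards, assuming the same in-VM kubectl binary and kubeconfig paths shown in the log:
	# Hedged check run inside the VM; paths taken from the command lines above.
	sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	# Expected to show something like:
	#   hosts {
	#      192.168.50.1 host.minikube.internal
	#      fallthrough
	#   }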
	I0318 21:46:19.707294   52399 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-389288" context rescaled to 1 replicas
	I0318 21:46:19.792591   52399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.557552295s)
	I0318 21:46:19.792623   52399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.541426779s)
	I0318 21:46:19.792655   52399 main.go:141] libmachine: Making call to close driver server
	I0318 21:46:19.792667   52399 main.go:141] libmachine: (calico-389288) Calling .Close
	I0318 21:46:19.792669   52399 main.go:141] libmachine: Making call to close driver server
	I0318 21:46:19.792683   52399 main.go:141] libmachine: (calico-389288) Calling .Close
	I0318 21:46:19.793076   52399 main.go:141] libmachine: (calico-389288) DBG | Closing plugin on server side
	I0318 21:46:19.793088   52399 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:46:19.793101   52399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:46:19.793109   52399 main.go:141] libmachine: Making call to close driver server
	I0318 21:46:19.793117   52399 main.go:141] libmachine: (calico-389288) Calling .Close
	I0318 21:46:19.793148   52399 main.go:141] libmachine: (calico-389288) DBG | Closing plugin on server side
	I0318 21:46:19.793364   52399 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:46:19.793407   52399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:46:19.793487   52399 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:46:19.793508   52399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:46:19.793536   52399 main.go:141] libmachine: Making call to close driver server
	I0318 21:46:19.793548   52399 main.go:141] libmachine: (calico-389288) Calling .Close
	I0318 21:46:19.794710   52399 main.go:141] libmachine: (calico-389288) DBG | Closing plugin on server side
	I0318 21:46:19.795186   52399 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:46:19.795206   52399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:46:19.806387   52399 main.go:141] libmachine: Making call to close driver server
	I0318 21:46:19.806413   52399 main.go:141] libmachine: (calico-389288) Calling .Close
	I0318 21:46:19.806696   52399 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:46:19.806715   52399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:46:19.808338   52399 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 21:46:20.628726   52561 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.088909555s)
	I0318 21:46:20.628760   52561 crio.go:469] duration metric: took 3.089041974s to extract the tarball
	I0318 21:46:20.628770   52561 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:46:20.676983   52561 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:46:20.728054   52561 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:46:20.728079   52561 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:46:20.728087   52561 kubeadm.go:928] updating node { 192.168.61.187 8443 v1.28.4 crio true true} ...
	I0318 21:46:20.728208   52561 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-389288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-389288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0318 21:46:20.728287   52561 ssh_runner.go:195] Run: crio config
	I0318 21:46:20.784932   52561 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0318 21:46:20.784978   52561 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:46:20.784998   52561 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.187 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-389288 NodeName:custom-flannel-389288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:46:20.785128   52561 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-389288"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:46:20.785182   52561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:46:20.801707   52561 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:46:20.801765   52561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:46:20.818493   52561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 21:46:20.841770   52561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:46:20.863502   52561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
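A kubeadm configuration like the kubeadm.yaml.new just written (later copied to /var/tmp/minikube/kubeadm.yaml) can be sanity-checked without touching the node; a minimal sketch, assuming the same binaries path used throughout this run:
	# Hedged example: --dry-run renders the manifests without starting the control plane.
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run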
	I0318 21:46:20.883966   52561 ssh_runner.go:195] Run: grep 192.168.61.187	control-plane.minikube.internal$ /etc/hosts
	I0318 21:46:20.889181   52561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:46:20.903419   52561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:46:21.046815   52561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:46:21.066903   52561 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288 for IP: 192.168.61.187
	I0318 21:46:21.066931   52561 certs.go:194] generating shared ca certs ...
	I0318 21:46:21.066959   52561 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:21.067141   52561 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:46:21.067219   52561 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:46:21.067235   52561 certs.go:256] generating profile certs ...
	I0318 21:46:21.067304   52561 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.key
	I0318 21:46:21.067329   52561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt with IP's: []
	I0318 21:46:21.182649   52561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt ...
	I0318 21:46:21.182677   52561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: {Name:mk6967627ea5208125856518cda7779ccf60979f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:21.182839   52561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.key ...
	I0318 21:46:21.182856   52561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.key: {Name:mk34a0534940585ac02c8fbdaf71255f071bd64b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:21.182947   52561 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.key.a89833d6
	I0318 21:46:21.182963   52561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.crt.a89833d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.187]
	I0318 21:46:21.538797   52561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.crt.a89833d6 ...
	I0318 21:46:21.538829   52561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.crt.a89833d6: {Name:mkec3e6406812697dd3ac1d234163733f2cb0f7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:21.538986   52561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.key.a89833d6 ...
	I0318 21:46:21.539003   52561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.key.a89833d6: {Name:mk7165ad91f4850a4297e00ec184bf5c6564785c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:21.539103   52561 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.crt.a89833d6 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.crt
	I0318 21:46:21.539214   52561 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.key.a89833d6 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.key
	I0318 21:46:21.539292   52561 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/proxy-client.key
	I0318 21:46:21.539313   52561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/proxy-client.crt with IP's: []
	I0318 21:46:21.721765   52561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/proxy-client.crt ...
	I0318 21:46:21.721801   52561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/proxy-client.crt: {Name:mk57119c73cd0cdf7b8fc898a4f5b5987779f6e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:21.722008   52561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/proxy-client.key ...
	I0318 21:46:21.722030   52561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/proxy-client.key: {Name:mke1b81690cc8b540e681de56a4bc41ddb8e4bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:21.722283   52561 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:46:21.722338   52561 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:46:21.722358   52561 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:46:21.722390   52561 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:46:21.722429   52561 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:46:21.722464   52561 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:46:21.722523   52561 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:46:21.723386   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:46:21.753066   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:46:21.783953   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:46:21.821247   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:46:21.849790   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:46:21.876606   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:46:21.902896   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:46:21.930479   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:46:21.957857   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:46:21.985373   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:46:22.012172   52561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:46:22.040286   52561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:46:22.060084   52561 ssh_runner.go:195] Run: openssl version
	I0318 21:46:22.066388   52561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:46:22.078941   52561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:46:22.084196   52561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:46:22.084253   52561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:46:22.090661   52561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:46:22.103532   52561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:46:22.117551   52561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:46:22.122792   52561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:46:22.122841   52561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:46:22.130092   52561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:46:22.143933   52561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:46:22.157913   52561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:46:22.162952   52561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:46:22.162992   52561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:46:22.169085   52561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
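The openssl/ln pairs above implement the standard OpenSSL CA-directory layout: each trusted certificate is linked under /etc/ssl/certs as <subject-hash>.0 (b5213941.0 for minikubeCA in this run). A minimal sketch of the same step, using the minikubeCA.pem paths from the log:
	# The hash (e.g. b5213941) is the certificate's subject hash, computed by openssl.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"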
	I0318 21:46:22.181393   52561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:46:22.186279   52561 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 21:46:22.186336   52561 kubeadm.go:391] StartCluster: {Name:custom-flannel-389288 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-389288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.187 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:46:22.186433   52561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:46:22.186490   52561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:46:22.225398   52561 cri.go:89] found id: ""
	I0318 21:46:22.225494   52561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 21:46:22.237272   52561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:46:22.248927   52561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:46:22.261297   52561 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:46:22.261317   52561 kubeadm.go:156] found existing configuration files:
	
	I0318 21:46:22.261366   52561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:46:22.273793   52561 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:46:22.273876   52561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:46:22.285665   52561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:46:22.296772   52561 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:46:22.296818   52561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:46:22.308012   52561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:46:22.320177   52561 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:46:22.320236   52561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:46:22.332438   52561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:46:22.345743   52561 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:46:22.345828   52561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:46:22.357078   52561 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 21:46:22.422629   52561 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 21:46:22.422890   52561 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 21:46:22.576473   52561 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 21:46:22.576637   52561 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 21:46:22.576829   52561 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 21:46:19.809916   52399 addons.go:505] duration metric: took 1.968678055s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 21:46:21.421708   52399 node_ready.go:53] node "calico-389288" has status "Ready":"False"
	I0318 21:46:22.866416   52561 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 21:46:23.448483   54622 start.go:364] duration metric: took 10.884985162s to acquireMachinesLock for "enable-default-cni-389288"
	I0318 21:46:23.448541   54622 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-389288 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-389288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:46:23.448679   54622 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 21:46:22.954940   52561 out.go:204]   - Generating certificates and keys ...
	I0318 21:46:22.955062   52561 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 21:46:22.955156   52561 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 21:46:23.001548   52561 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 21:46:23.139655   52561 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 21:46:23.220440   52561 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 21:46:23.402639   52561 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 21:46:23.467822   52561 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 21:46:23.468193   52561 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-389288 localhost] and IPs [192.168.61.187 127.0.0.1 ::1]
	I0318 21:46:24.441178   52561 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 21:46:24.441521   52561 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-389288 localhost] and IPs [192.168.61.187 127.0.0.1 ::1]
	I0318 21:46:24.746112   52561 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 21:46:24.903172   52561 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 21:46:25.031821   52561 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 21:46:25.032144   52561 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 21:46:25.881618   52561 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 21:46:26.174218   52561 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 21:46:26.460355   52561 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 21:46:26.799341   52561 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 21:46:26.800368   52561 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 21:46:26.806486   52561 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 21:46:23.450616   54622 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 21:46:23.450847   54622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:46:23.450938   54622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:46:23.470350   54622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
	I0318 21:46:23.470773   54622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:46:23.471381   54622 main.go:141] libmachine: Using API Version  1
	I0318 21:46:23.471403   54622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:46:23.471744   54622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:46:23.471912   54622 main.go:141] libmachine: (enable-default-cni-389288) Calling .GetMachineName
	I0318 21:46:23.472067   54622 main.go:141] libmachine: (enable-default-cni-389288) Calling .DriverName
	I0318 21:46:23.472180   54622 start.go:159] libmachine.API.Create for "enable-default-cni-389288" (driver="kvm2")
	I0318 21:46:23.472221   54622 client.go:168] LocalClient.Create starting
	I0318 21:46:23.472256   54622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 21:46:23.472296   54622 main.go:141] libmachine: Decoding PEM data...
	I0318 21:46:23.472320   54622 main.go:141] libmachine: Parsing certificate...
	I0318 21:46:23.472391   54622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 21:46:23.472416   54622 main.go:141] libmachine: Decoding PEM data...
	I0318 21:46:23.472428   54622 main.go:141] libmachine: Parsing certificate...
	I0318 21:46:23.472450   54622 main.go:141] libmachine: Running pre-create checks...
	I0318 21:46:23.472459   54622 main.go:141] libmachine: (enable-default-cni-389288) Calling .PreCreateCheck
	I0318 21:46:23.472859   54622 main.go:141] libmachine: (enable-default-cni-389288) Calling .GetConfigRaw
	I0318 21:46:23.476374   54622 main.go:141] libmachine: Creating machine...
	I0318 21:46:23.476392   54622 main.go:141] libmachine: (enable-default-cni-389288) Calling .Create
	I0318 21:46:23.476986   54622 main.go:141] libmachine: (enable-default-cni-389288) Creating KVM machine...
	I0318 21:46:23.478064   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | found existing default KVM network
	I0318 21:46:23.481771   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:23.479403   54713 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:59:ab:eb} reservation:<nil>}
	I0318 21:46:23.481792   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:23.480576   54713 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:61:1b:fe} reservation:<nil>}
	I0318 21:46:23.481848   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:23.481810   54713 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:77:d6:54} reservation:<nil>}
	I0318 21:46:23.483255   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:23.483150   54713 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289a90}
	I0318 21:46:23.483296   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | created network xml: 
	I0318 21:46:23.483306   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | <network>
	I0318 21:46:23.483316   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |   <name>mk-enable-default-cni-389288</name>
	I0318 21:46:23.483329   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |   <dns enable='no'/>
	I0318 21:46:23.483338   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |   
	I0318 21:46:23.483346   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0318 21:46:23.483355   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |     <dhcp>
	I0318 21:46:23.483364   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0318 21:46:23.483372   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |     </dhcp>
	I0318 21:46:23.483378   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |   </ip>
	I0318 21:46:23.483386   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG |   
	I0318 21:46:23.483394   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | </network>
	I0318 21:46:23.483403   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | 
	I0318 21:46:23.489128   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | trying to create private KVM network mk-enable-default-cni-389288 192.168.72.0/24...
	I0318 21:46:23.584822   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | private KVM network mk-enable-default-cni-389288 192.168.72.0/24 created
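The free-subnet scan and the <network> XML dumped above are how minikube creates the private libvirt network; a hedged manual equivalent using standard virsh commands (the filename is assumed, not taken from the log) would be:
	# Assumed file mk-enable-default-cni-389288.xml containing the <network> XML above.
	virsh net-define mk-enable-default-cni-389288.xml
	virsh net-start mk-enable-default-cni-389288
	virsh net-dhcp-leases mk-enable-default-cni-389288   # later shows leases like the ones logged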
	I0318 21:46:23.584866   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:23.584770   54713 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:46:23.584895   54622 main.go:141] libmachine: (enable-default-cni-389288) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/enable-default-cni-389288 ...
	I0318 21:46:23.584951   54622 main.go:141] libmachine: (enable-default-cni-389288) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 21:46:23.584971   54622 main.go:141] libmachine: (enable-default-cni-389288) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 21:46:23.912594   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:23.912430   54713 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/enable-default-cni-389288/id_rsa...
	I0318 21:46:24.086106   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:24.085964   54713 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/enable-default-cni-389288/enable-default-cni-389288.rawdisk...
	I0318 21:46:24.086140   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Writing magic tar header
	I0318 21:46:24.086159   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Writing SSH key tar header
	I0318 21:46:24.086171   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:24.086089   54713 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/enable-default-cni-389288 ...
	I0318 21:46:24.086194   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/enable-default-cni-389288
	I0318 21:46:24.086260   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 21:46:24.086280   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:46:24.086294   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 21:46:24.086310   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 21:46:24.086452   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Checking permissions on dir: /home/jenkins
	I0318 21:46:24.086467   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Checking permissions on dir: /home
	I0318 21:46:24.086478   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | Skipping /home - not owner
	I0318 21:46:24.086497   54622 main.go:141] libmachine: (enable-default-cni-389288) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/enable-default-cni-389288 (perms=drwx------)
	I0318 21:46:24.086512   54622 main.go:141] libmachine: (enable-default-cni-389288) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 21:46:24.086530   54622 main.go:141] libmachine: (enable-default-cni-389288) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 21:46:24.086544   54622 main.go:141] libmachine: (enable-default-cni-389288) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 21:46:24.086559   54622 main.go:141] libmachine: (enable-default-cni-389288) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 21:46:24.086572   54622 main.go:141] libmachine: (enable-default-cni-389288) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 21:46:24.086582   54622 main.go:141] libmachine: (enable-default-cni-389288) Creating domain...
	I0318 21:46:24.087849   54622 main.go:141] libmachine: (enable-default-cni-389288) define libvirt domain using xml: 
	I0318 21:46:24.087873   54622 main.go:141] libmachine: (enable-default-cni-389288) <domain type='kvm'>
	I0318 21:46:24.087886   54622 main.go:141] libmachine: (enable-default-cni-389288)   <name>enable-default-cni-389288</name>
	I0318 21:46:24.087893   54622 main.go:141] libmachine: (enable-default-cni-389288)   <memory unit='MiB'>3072</memory>
	I0318 21:46:24.087902   54622 main.go:141] libmachine: (enable-default-cni-389288)   <vcpu>2</vcpu>
	I0318 21:46:24.087909   54622 main.go:141] libmachine: (enable-default-cni-389288)   <features>
	I0318 21:46:24.087918   54622 main.go:141] libmachine: (enable-default-cni-389288)     <acpi/>
	I0318 21:46:24.087925   54622 main.go:141] libmachine: (enable-default-cni-389288)     <apic/>
	I0318 21:46:24.087932   54622 main.go:141] libmachine: (enable-default-cni-389288)     <pae/>
	I0318 21:46:24.087938   54622 main.go:141] libmachine: (enable-default-cni-389288)     
	I0318 21:46:24.087947   54622 main.go:141] libmachine: (enable-default-cni-389288)   </features>
	I0318 21:46:24.087954   54622 main.go:141] libmachine: (enable-default-cni-389288)   <cpu mode='host-passthrough'>
	I0318 21:46:24.087962   54622 main.go:141] libmachine: (enable-default-cni-389288)   
	I0318 21:46:24.087969   54622 main.go:141] libmachine: (enable-default-cni-389288)   </cpu>
	I0318 21:46:24.087977   54622 main.go:141] libmachine: (enable-default-cni-389288)   <os>
	I0318 21:46:24.087984   54622 main.go:141] libmachine: (enable-default-cni-389288)     <type>hvm</type>
	I0318 21:46:24.087993   54622 main.go:141] libmachine: (enable-default-cni-389288)     <boot dev='cdrom'/>
	I0318 21:46:24.088000   54622 main.go:141] libmachine: (enable-default-cni-389288)     <boot dev='hd'/>
	I0318 21:46:24.088013   54622 main.go:141] libmachine: (enable-default-cni-389288)     <bootmenu enable='no'/>
	I0318 21:46:24.088020   54622 main.go:141] libmachine: (enable-default-cni-389288)   </os>
	I0318 21:46:24.088027   54622 main.go:141] libmachine: (enable-default-cni-389288)   <devices>
	I0318 21:46:24.088035   54622 main.go:141] libmachine: (enable-default-cni-389288)     <disk type='file' device='cdrom'>
	I0318 21:46:24.088047   54622 main.go:141] libmachine: (enable-default-cni-389288)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/enable-default-cni-389288/boot2docker.iso'/>
	I0318 21:46:24.088055   54622 main.go:141] libmachine: (enable-default-cni-389288)       <target dev='hdc' bus='scsi'/>
	I0318 21:46:24.088063   54622 main.go:141] libmachine: (enable-default-cni-389288)       <readonly/>
	I0318 21:46:24.088070   54622 main.go:141] libmachine: (enable-default-cni-389288)     </disk>
	I0318 21:46:24.088078   54622 main.go:141] libmachine: (enable-default-cni-389288)     <disk type='file' device='disk'>
	I0318 21:46:24.088088   54622 main.go:141] libmachine: (enable-default-cni-389288)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 21:46:24.088115   54622 main.go:141] libmachine: (enable-default-cni-389288)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/enable-default-cni-389288/enable-default-cni-389288.rawdisk'/>
	I0318 21:46:24.088124   54622 main.go:141] libmachine: (enable-default-cni-389288)       <target dev='hda' bus='virtio'/>
	I0318 21:46:24.088132   54622 main.go:141] libmachine: (enable-default-cni-389288)     </disk>
	I0318 21:46:24.088140   54622 main.go:141] libmachine: (enable-default-cni-389288)     <interface type='network'>
	I0318 21:46:24.088150   54622 main.go:141] libmachine: (enable-default-cni-389288)       <source network='mk-enable-default-cni-389288'/>
	I0318 21:46:24.088157   54622 main.go:141] libmachine: (enable-default-cni-389288)       <model type='virtio'/>
	I0318 21:46:24.088165   54622 main.go:141] libmachine: (enable-default-cni-389288)     </interface>
	I0318 21:46:24.088173   54622 main.go:141] libmachine: (enable-default-cni-389288)     <interface type='network'>
	I0318 21:46:24.088182   54622 main.go:141] libmachine: (enable-default-cni-389288)       <source network='default'/>
	I0318 21:46:24.088191   54622 main.go:141] libmachine: (enable-default-cni-389288)       <model type='virtio'/>
	I0318 21:46:24.088200   54622 main.go:141] libmachine: (enable-default-cni-389288)     </interface>
	I0318 21:46:24.088208   54622 main.go:141] libmachine: (enable-default-cni-389288)     <serial type='pty'>
	I0318 21:46:24.088221   54622 main.go:141] libmachine: (enable-default-cni-389288)       <target port='0'/>
	I0318 21:46:24.088228   54622 main.go:141] libmachine: (enable-default-cni-389288)     </serial>
	I0318 21:46:24.088236   54622 main.go:141] libmachine: (enable-default-cni-389288)     <console type='pty'>
	I0318 21:46:24.088243   54622 main.go:141] libmachine: (enable-default-cni-389288)       <target type='serial' port='0'/>
	I0318 21:46:24.088251   54622 main.go:141] libmachine: (enable-default-cni-389288)     </console>
	I0318 21:46:24.088259   54622 main.go:141] libmachine: (enable-default-cni-389288)     <rng model='virtio'>
	I0318 21:46:24.088268   54622 main.go:141] libmachine: (enable-default-cni-389288)       <backend model='random'>/dev/random</backend>
	I0318 21:46:24.088275   54622 main.go:141] libmachine: (enable-default-cni-389288)     </rng>
	I0318 21:46:24.088282   54622 main.go:141] libmachine: (enable-default-cni-389288)     
	I0318 21:46:24.088288   54622 main.go:141] libmachine: (enable-default-cni-389288)     
	I0318 21:46:24.088296   54622 main.go:141] libmachine: (enable-default-cni-389288)   </devices>
	I0318 21:46:24.088302   54622 main.go:141] libmachine: (enable-default-cni-389288) </domain>
	I0318 21:46:24.088312   54622 main.go:141] libmachine: (enable-default-cni-389288) 
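Similarly, the <domain> XML dumped above is what libvirt receives when the VM is defined; a hedged manual equivalent using standard virsh commands (the filename is assumed, not taken from the log):
	# Assumed file enable-default-cni-389288.xml containing the <domain> XML above.
	virsh define enable-default-cni-389288.xml
	virsh start enable-default-cni-389288
	virsh domifaddr enable-default-cni-389288   # corresponds to the "Waiting to get IP" retries below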
	I0318 21:46:24.092583   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c4:72:3f in network default
	I0318 21:46:24.093322   54622 main.go:141] libmachine: (enable-default-cni-389288) Ensuring networks are active...
	I0318 21:46:24.093344   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:24.094138   54622 main.go:141] libmachine: (enable-default-cni-389288) Ensuring network default is active
	I0318 21:46:24.094558   54622 main.go:141] libmachine: (enable-default-cni-389288) Ensuring network mk-enable-default-cni-389288 is active
	I0318 21:46:24.095153   54622 main.go:141] libmachine: (enable-default-cni-389288) Getting domain xml...
	I0318 21:46:24.095937   54622 main.go:141] libmachine: (enable-default-cni-389288) Creating domain...
	I0318 21:46:25.590682   54622 main.go:141] libmachine: (enable-default-cni-389288) Waiting to get IP...
	I0318 21:46:25.591829   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:25.592343   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:25.592371   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:25.592324   54713 retry.go:31] will retry after 221.882465ms: waiting for machine to come up
	I0318 21:46:25.815847   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:25.816414   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:25.816462   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:25.816385   54713 retry.go:31] will retry after 352.708657ms: waiting for machine to come up
	I0318 21:46:26.171215   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:26.171930   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:26.171958   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:26.171869   54713 retry.go:31] will retry after 409.802824ms: waiting for machine to come up
	I0318 21:46:26.583640   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:26.584477   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:26.584514   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:26.584428   54713 retry.go:31] will retry after 514.307378ms: waiting for machine to come up
	I0318 21:46:27.100057   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:27.100613   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:27.100640   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:27.100576   54713 retry.go:31] will retry after 527.743223ms: waiting for machine to come up
	I0318 21:46:23.168177   52904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:46:23.168226   52904 machine.go:97] duration metric: took 9.210328579s to provisionDockerMachine
	I0318 21:46:23.168243   52904 start.go:293] postStartSetup for "kubernetes-upgrade-397473" (driver="kvm2")
	I0318 21:46:23.168271   52904 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:46:23.168300   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:46:23.168601   52904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:46:23.168624   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:23.171662   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.172026   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:23.172051   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.172182   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:23.172382   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:23.172592   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:23.172754   52904 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa Username:docker}
	I0318 21:46:23.269644   52904 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:46:23.275494   52904 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:46:23.275519   52904 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:46:23.275584   52904 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:46:23.275703   52904 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:46:23.275829   52904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:46:23.289646   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:46:23.321477   52904 start.go:296] duration metric: took 153.215651ms for postStartSetup
	I0318 21:46:23.321519   52904 fix.go:56] duration metric: took 9.391194084s for fixHost
	I0318 21:46:23.321546   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:23.324789   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.325307   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:23.325361   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.325502   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:23.325668   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:23.325841   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:23.326049   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:23.326277   52904 main.go:141] libmachine: Using SSH client type: native
	I0318 21:46:23.326503   52904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.139 22 <nil> <nil>}
	I0318 21:46:23.326519   52904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:46:23.448294   52904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710798383.443065085
	
	I0318 21:46:23.448323   52904 fix.go:216] guest clock: 1710798383.443065085
	I0318 21:46:23.448334   52904 fix.go:229] Guest: 2024-03-18 21:46:23.443065085 +0000 UTC Remote: 2024-03-18 21:46:23.321525589 +0000 UTC m=+40.816786231 (delta=121.539496ms)
	I0318 21:46:23.448387   52904 fix.go:200] guest clock delta is within tolerance: 121.539496ms
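	The delta above is just the difference between the guest and remote timestamps: 21:46:23.443065085 − 21:46:23.321525589 ≈ 0.121539 s, i.e. the 121.539496 ms the log reports as within tolerance.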
	I0318 21:46:23.448394   52904 start.go:83] releasing machines lock for "kubernetes-upgrade-397473", held for 9.518109302s
	I0318 21:46:23.449003   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:46:23.449292   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetIP
	I0318 21:46:23.453196   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.453585   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:23.453607   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.453917   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:46:23.454592   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:46:23.454789   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .DriverName
	I0318 21:46:23.454896   52904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:46:23.454948   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:23.455023   52904 ssh_runner.go:195] Run: cat /version.json
	I0318 21:46:23.455039   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHHostname
	I0318 21:46:23.458787   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.458823   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.459522   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:23.459558   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:23.459580   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.459593   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:23.459775   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:23.459943   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHPort
	I0318 21:46:23.460100   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:23.460193   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHKeyPath
	I0318 21:46:23.460258   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:23.460440   52904 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa Username:docker}
	I0318 21:46:23.460463   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetSSHUsername
	I0318 21:46:23.460601   52904 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/kubernetes-upgrade-397473/id_rsa Username:docker}
	I0318 21:46:23.568858   52904 ssh_runner.go:195] Run: systemctl --version
	I0318 21:46:23.595381   52904 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:46:23.771467   52904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:46:23.780852   52904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:46:23.780942   52904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:46:23.793432   52904 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 21:46:23.793458   52904 start.go:494] detecting cgroup driver to use...
	I0318 21:46:23.793526   52904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:46:23.815279   52904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:46:23.833383   52904 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:46:23.833450   52904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:46:23.850705   52904 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:46:23.867888   52904 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:46:24.039067   52904 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:46:24.322576   52904 docker.go:233] disabling docker service ...
	I0318 21:46:24.322651   52904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:46:24.363677   52904 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:46:24.565878   52904 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:46:25.181791   52904 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:46:25.850756   52904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:46:25.928731   52904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:46:26.036249   52904 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:46:26.036328   52904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:26.085435   52904 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:46:26.085527   52904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:26.123196   52904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:26.151232   52904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:26.178937   52904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:46:26.229713   52904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:26.264176   52904 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:26.305510   52904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:46:26.327444   52904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:46:26.347013   52904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:46:26.366105   52904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:46:26.713220   52904 ssh_runner.go:195] Run: sudo systemctl restart crio
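	The sed commands above all rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A minimal sketch, using only paths and values taken from the log, of confirming the rewrite took effect on the node:
	    # Values the log's sed edits should have left in the drop-in.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # Expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", conmon_cgroup = "pod"
	    # Confirm the restarted runtime answers on the endpoint written to /etc/crictl.yaml.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version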
	I0318 21:46:26.808041   52561 out.go:204]   - Booting up control plane ...
	I0318 21:46:26.808151   52561 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 21:46:26.809148   52561 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 21:46:26.811200   52561 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 21:46:26.835672   52561 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 21:46:26.837791   52561 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 21:46:26.837870   52561 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 21:46:27.065230   52561 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 21:46:27.694226   52904 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:46:27.694312   52904 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:46:27.700787   52904 start.go:562] Will wait 60s for crictl version
	I0318 21:46:27.700864   52904 ssh_runner.go:195] Run: which crictl
	I0318 21:46:27.707127   52904 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:46:27.763183   52904 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:46:27.763280   52904 ssh_runner.go:195] Run: crio --version
	I0318 21:46:27.801047   52904 ssh_runner.go:195] Run: crio --version
	I0318 21:46:27.836928   52904 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 21:46:23.695681   52399 node_ready.go:53] node "calico-389288" has status "Ready":"False"
	I0318 21:46:25.696317   52399 node_ready.go:53] node "calico-389288" has status "Ready":"False"
	I0318 21:46:27.696644   52399 node_ready.go:53] node "calico-389288" has status "Ready":"False"
	I0318 21:46:27.630430   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:27.631091   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:27.631117   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:27.631029   54713 retry.go:31] will retry after 844.537711ms: waiting for machine to come up
	I0318 21:46:28.477067   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:28.477735   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:28.477758   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:28.477660   54713 retry.go:31] will retry after 1.106558762s: waiting for machine to come up
	I0318 21:46:29.586402   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:29.587107   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:29.587206   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:29.587172   54713 retry.go:31] will retry after 1.205927676s: waiting for machine to come up
	I0318 21:46:30.794486   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:30.795184   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:30.795213   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:30.795126   54713 retry.go:31] will retry after 1.481763506s: waiting for machine to come up
	I0318 21:46:32.277918   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | domain enable-default-cni-389288 has defined MAC address 52:54:00:c6:cc:88 in network mk-enable-default-cni-389288
	I0318 21:46:32.278487   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | unable to find current IP address of domain enable-default-cni-389288 in network mk-enable-default-cni-389288
	I0318 21:46:32.278515   54622 main.go:141] libmachine: (enable-default-cni-389288) DBG | I0318 21:46:32.278445   54713 retry.go:31] will retry after 2.041514802s: waiting for machine to come up
	I0318 21:46:27.838344   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) Calling .GetIP
	I0318 21:46:27.841396   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:27.841832   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:8a:7e", ip: ""} in network mk-kubernetes-upgrade-397473: {Iface:virbr3 ExpiryTime:2024-03-18 22:45:10 +0000 UTC Type:0 Mac:52:54:00:5f:8a:7e Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:kubernetes-upgrade-397473 Clientid:01:52:54:00:5f:8a:7e}
	I0318 21:46:27.841860   52904 main.go:141] libmachine: (kubernetes-upgrade-397473) DBG | domain kubernetes-upgrade-397473 has defined IP address 192.168.39.139 and MAC address 52:54:00:5f:8a:7e in network mk-kubernetes-upgrade-397473
	I0318 21:46:27.842121   52904 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:46:27.848487   52904 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-397473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-397473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:46:27.848662   52904 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:46:27.848723   52904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:46:27.918365   52904 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:46:27.918387   52904 crio.go:433] Images already preloaded, skipping extraction
	I0318 21:46:27.918432   52904 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:46:27.960562   52904 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:46:27.960591   52904 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:46:27.960601   52904 kubeadm.go:928] updating node { 192.168.39.139 8443 v1.29.0-rc.2 crio true true} ...
	I0318 21:46:27.960755   52904 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-397473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-397473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:46:27.960840   52904 ssh_runner.go:195] Run: crio config
	I0318 21:46:28.040553   52904 cni.go:84] Creating CNI manager for ""
	I0318 21:46:28.040583   52904 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:46:28.040602   52904 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:46:28.040630   52904 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.139 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-397473 NodeName:kubernetes-upgrade-397473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:46:28.040820   52904 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.139
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-397473"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.139
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.139"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:46:28.040892   52904 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 21:46:28.054603   52904 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:46:28.054670   52904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:46:28.070581   52904 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0318 21:46:28.099112   52904 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 21:46:28.130918   52904 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
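	The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch (binary path, file path, and version taken from the log) of sanity-checking such a file without bringing up a control plane:
	    # Dry-run init against the generated file: runs validation and prints what would be done
	    # without starting a control plane.
	    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	    # Parsing check only: list the images the config implies (fails fast on a malformed file).
	    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new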
	I0318 21:46:28.157972   52904 ssh_runner.go:195] Run: grep 192.168.39.139	control-plane.minikube.internal$ /etc/hosts
	I0318 21:46:28.163234   52904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:46:28.394498   52904 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:46:28.420173   52904 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473 for IP: 192.168.39.139
	I0318 21:46:28.420197   52904 certs.go:194] generating shared ca certs ...
	I0318 21:46:28.420227   52904 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:46:28.420431   52904 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:46:28.420483   52904 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:46:28.420496   52904 certs.go:256] generating profile certs ...
	I0318 21:46:28.420600   52904 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/client.key
	I0318 21:46:28.420657   52904 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.key.218bb9d1
	I0318 21:46:28.420711   52904 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.key
	I0318 21:46:28.420844   52904 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:46:28.420893   52904 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:46:28.420922   52904 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:46:28.420962   52904 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:46:28.420998   52904 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:46:28.421031   52904 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:46:28.421088   52904 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:46:28.422046   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:46:28.458574   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:46:28.497068   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:46:28.534810   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:46:28.577904   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 21:46:28.694907   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:46:28.884947   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:46:29.024175   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kubernetes-upgrade-397473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:46:29.101327   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:46:29.148691   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:46:29.187906   52904 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:46:29.228354   52904 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:46:29.250049   52904 ssh_runner.go:195] Run: openssl version
	I0318 21:46:29.258916   52904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:46:29.274974   52904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:46:29.282635   52904 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:46:29.282690   52904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:46:29.294982   52904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:46:29.311575   52904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:46:29.329664   52904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:46:29.335702   52904 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:46:29.335773   52904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:46:29.344525   52904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:46:29.482616   52904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:46:29.564042   52904 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:46:29.645182   52904 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:46:29.645288   52904 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:46:29.799587   52904 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:46:29.868836   52904 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:46:29.889117   52904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:46:29.947852   52904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:46:29.980889   52904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:46:29.996745   52904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:46:30.053838   52904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:46:30.112268   52904 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
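	The openssl checks above use -checkend 86400, i.e. "will this certificate still be valid 24 hours (86400 s) from now?"; the command exits 0 when it will and non-zero when it would expire inside that window. The same check, run by hand against one of the paths from the log:
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid in 24h" || echo "expires within 24h"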
	I0318 21:46:30.167889   52904 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-397473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-397473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:46:30.168036   52904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:46:30.168125   52904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:46:30.300716   52904 cri.go:89] found id: "4e9892edde7da9384be1ec31557ade66adf42f0116793d31e75eff148c19e6bc"
	I0318 21:46:30.300743   52904 cri.go:89] found id: "237dbb526df1f0a49a8e097193033154b29359716bb726c74f681830e713ff9c"
	I0318 21:46:30.300748   52904 cri.go:89] found id: "6875abd4778e3f4e61528fff1edb81381f85098028e245129c1e541da16025aa"
	I0318 21:46:30.300772   52904 cri.go:89] found id: "ead08e82e5b3657f652e6348867477a6c6accc165d52c41fe4a72943ad116384"
	I0318 21:46:30.300778   52904 cri.go:89] found id: "5577534d60621be2caa2da8e33fa7338d0a087cfb42520584a239642e34a3f35"
	I0318 21:46:30.300781   52904 cri.go:89] found id: "310921f2530d61d94fad534f195bf8b3ba98a6d2ff39bcd98e5326715460c760"
	I0318 21:46:30.300784   52904 cri.go:89] found id: "4822b09ec08e6d6f422eb90b4825baa1a1e01f17d2ab2a639e65c3662529e803"
	I0318 21:46:30.300788   52904 cri.go:89] found id: "5c8c6be6aa4b8f6bc8ba0b9cd2dbd4730b682a1ae50445c12f8674045ceed34e"
	I0318 21:46:30.300791   52904 cri.go:89] found id: ""
	I0318 21:46:30.300861   52904 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.829549947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710798411829522942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64701175-3227-43cc-9edb-11c637a411ef name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.830891327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49ade5bc-1b99-46e8-b829-68a1ab515efe name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.830974421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49ade5bc-1b99-46e8-b829-68a1ab515efe name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.831501256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f3f71bac92a8186e1440d7e459ab57e28a81bd459b4903e1c9851f107054c93,PodSandboxId:6caee40c6ce358ff78d74235d7fcf998a86aa8c80281644c172c1112756bcd1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710798409205145074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0a596e372108b758c3f53dacd9143,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.c
ontainer.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04b88ee3da7984aa13bb46f98ec4ec4dee64d12072ac15ced0efe7b8d274387,PodSandboxId:01882ad0d8984e0a2be509fec2a65afa73c4d8afd7a7f12632cf1a4452eb7236,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710798408304539433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rcs62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0725a6c-e539-4d69-8b27-48ac3ef078b5,},Annotations:map[string]string{io.kubernetes.container.hash: d88e6a62,io.kubernetes.container.ports: [{\"name\":\"dn
s\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3179bb2a78790df05213923f26f9b648d17018565a29bd419f469233e9f965,PodSandboxId:0ab2cebb8498b312f3ec933b0aaa5a8ca12cc5ea338134566a3b009ddceb9953,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710798408215965974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vg6k2,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 9733a78d-fbd4-4873-a72b-b221ba732988,},Annotations:map[string]string{io.kubernetes.container.hash: 8287e311,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1868292a57b31f4a2d2ad75113d391c6617456d11c14d20f28f6d7e9df625,PodSandboxId:6a898e95988f016a6a3aecdfa9edc1e090f2ea16a9f7bdac2eb6b3218d8a3f21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f7
09a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710798408302639937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 374d5959,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08fbbc8339be04e349ac8f08111ae27fbb2ea808b6d36277f75248559a9a2a3,PodSandboxId:f2ad61daef03f42c16aade13d93f989392d611eeb46adebef3a2a0d7c8068c11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Sta
te:CONTAINER_RUNNING,CreatedAt:1710798408197098276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce31a364-ebb5-4786-8832-f01f165bc442,},Annotations:map[string]string{io.kubernetes.container.hash: 365bdd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff9baa6f9bf5c9b7d687ceeb57d56f591030f86ae5ddc6f9ffec7797f1e1b338,PodSandboxId:157b09c7e7648e8984372ecd45531bf049ddb7c62283e23b101c0754ac629be8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:171079840
3420867757,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b203bac04d37b74af59b9a4f6068c7e1,},Annotations:map[string]string{io.kubernetes.container.hash: fba7b131,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befaed49c7eed51bd2ed2aa3ec4cd3d37027843d7b2e86e48faa3ca326df5c08,PodSandboxId:335d77613d1c5fef9afc88f6334ccf83468e89ce0f707bb2013208c92afe6273,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710798403424059368,Labels:map[str
ing]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84667172dd04302b8babc9336c170e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08d5b771ed3bac5c8541788d67665717c903b4b637b8fd1c356266ca2e0a691,PodSandboxId:6d2a8809947f8cc585f5c503f68af52f01d28b3250d0712558b2250e7be0e4ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710798403400912996,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658753e656da1fffa796f13d26ff36e0,},Annotations:map[string]string{io.kubernetes.container.hash: 88afc410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4834e77809713c6c4e918424cd9b8671ca007205a5c46d5753bee08b79aa1f7,PodSandboxId:6caee40c6ce358ff78d74235d7fcf998a86aa8c80281644c172c1112756bcd1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710798397019285382,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0a596e372108b758c3f53dacd9143,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f7458e204a3379e87294e8a81d9ff2e54f10394e3743a3709e4d5683ca615,PodSandboxId:0ab2cebb8498b312f3ec933b0aaa5a8ca12cc5ea338134566a3b009ddceb9953,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710798390221846963,Labels:map[
string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vg6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9733a78d-fbd4-4873-a72b-b221ba732988,},Annotations:map[string]string{io.kubernetes.container.hash: 8287e311,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875abd4778e3f4e61528fff1edb81381f85098028e245129c1e541da16025aa,PodSandboxId:751d2b680909c44299847c4227ec76461923d3ea91a2bd4ae215dffed63b032e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710798385353651922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce31a364-ebb5-4786-8832-f01f165bc442,},Annotations:map[string]string{io.kubernetes.container.hash: 365bdd2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9892edde7da9384be1ec31557ade66adf42f0116793d31e75eff148c19e6bc,PodSandboxId:1837aefba145bc659f36b769f4f56613c7ed0391d1e6bc5c8ddfd77a2532852f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710798385982213558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rcs62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0725a6c-e539-4d69-8b27-48ac3ef078b5,},Annotations:map[string]string{io.kubernetes.container.hash: d88e6a62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dbb526df1f0a49a8e097193033154b29359716bb726c74f681830e713ff9c,PodSandboxId:4d402f63299cb216b887c733e9c2d56e94a13f4e186635827f10e38478a24
8d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710798385502209270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b203bac04d37b74af59b9a4f6068c7e1,},Annotations:map[string]string{io.kubernetes.container.hash: fba7b131,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5577534d60621be2caa2da8e33fa7338d0a087cfb42520584a239642e34a3f35,PodSandboxId:20a7bc3af4265e0e680279006c0d287dbf494bff6c98a37f2a2bf1bd55669243,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710798385199549968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 374d5959,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead08e82e5b3657f652e6348867477a6c6accc165d52c41fe4a72943ad116384,PodSandboxId:712ecf58a55b8db41502fe58fb243af5ea1648d7f8adab9e17e0ff6ba3ffcc7d,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710798385247020351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658753e656da1fffa796f13d26ff36e0,},Annotations:map[string]string{io.kubernetes.container.hash: 88afc410,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822b09ec08e6d6f422eb90b4825baa1a1e01f17d2ab2a639e65c3662529e803,PodSandboxId:a5a1eb7b27ada651df0ba8e5a5258e5e8c9611f9996e15cfb9b534b32a78018f,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710798384962269504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84667172dd04302b8babc9336c170e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49ade5bc-1b99-46e8-b829-68a1ab515efe name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.887918083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72d108f9-bf8d-4af4-8996-eb39a0a75f19 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.888037124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72d108f9-bf8d-4af4-8996-eb39a0a75f19 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.889734980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cbfe499-83d4-423e-84c4-24f44703a6c9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.890254656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710798411890223044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cbfe499-83d4-423e-84c4-24f44703a6c9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.891039285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cae570a8-96b8-4484-8f18-42fd59032a38 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.891321129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cae570a8-96b8-4484-8f18-42fd59032a38 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.892080622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f3f71bac92a8186e1440d7e459ab57e28a81bd459b4903e1c9851f107054c93,PodSandboxId:6caee40c6ce358ff78d74235d7fcf998a86aa8c80281644c172c1112756bcd1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710798409205145074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0a596e372108b758c3f53dacd9143,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.c
ontainer.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04b88ee3da7984aa13bb46f98ec4ec4dee64d12072ac15ced0efe7b8d274387,PodSandboxId:01882ad0d8984e0a2be509fec2a65afa73c4d8afd7a7f12632cf1a4452eb7236,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710798408304539433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rcs62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0725a6c-e539-4d69-8b27-48ac3ef078b5,},Annotations:map[string]string{io.kubernetes.container.hash: d88e6a62,io.kubernetes.container.ports: [{\"name\":\"dn
s\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3179bb2a78790df05213923f26f9b648d17018565a29bd419f469233e9f965,PodSandboxId:0ab2cebb8498b312f3ec933b0aaa5a8ca12cc5ea338134566a3b009ddceb9953,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710798408215965974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vg6k2,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 9733a78d-fbd4-4873-a72b-b221ba732988,},Annotations:map[string]string{io.kubernetes.container.hash: 8287e311,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1868292a57b31f4a2d2ad75113d391c6617456d11c14d20f28f6d7e9df625,PodSandboxId:6a898e95988f016a6a3aecdfa9edc1e090f2ea16a9f7bdac2eb6b3218d8a3f21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f7
09a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710798408302639937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 374d5959,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08fbbc8339be04e349ac8f08111ae27fbb2ea808b6d36277f75248559a9a2a3,PodSandboxId:f2ad61daef03f42c16aade13d93f989392d611eeb46adebef3a2a0d7c8068c11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Sta
te:CONTAINER_RUNNING,CreatedAt:1710798408197098276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce31a364-ebb5-4786-8832-f01f165bc442,},Annotations:map[string]string{io.kubernetes.container.hash: 365bdd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff9baa6f9bf5c9b7d687ceeb57d56f591030f86ae5ddc6f9ffec7797f1e1b338,PodSandboxId:157b09c7e7648e8984372ecd45531bf049ddb7c62283e23b101c0754ac629be8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:171079840
3420867757,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b203bac04d37b74af59b9a4f6068c7e1,},Annotations:map[string]string{io.kubernetes.container.hash: fba7b131,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befaed49c7eed51bd2ed2aa3ec4cd3d37027843d7b2e86e48faa3ca326df5c08,PodSandboxId:335d77613d1c5fef9afc88f6334ccf83468e89ce0f707bb2013208c92afe6273,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710798403424059368,Labels:map[str
ing]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84667172dd04302b8babc9336c170e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08d5b771ed3bac5c8541788d67665717c903b4b637b8fd1c356266ca2e0a691,PodSandboxId:6d2a8809947f8cc585f5c503f68af52f01d28b3250d0712558b2250e7be0e4ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710798403400912996,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658753e656da1fffa796f13d26ff36e0,},Annotations:map[string]string{io.kubernetes.container.hash: 88afc410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4834e77809713c6c4e918424cd9b8671ca007205a5c46d5753bee08b79aa1f7,PodSandboxId:6caee40c6ce358ff78d74235d7fcf998a86aa8c80281644c172c1112756bcd1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710798397019285382,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0a596e372108b758c3f53dacd9143,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f7458e204a3379e87294e8a81d9ff2e54f10394e3743a3709e4d5683ca615,PodSandboxId:0ab2cebb8498b312f3ec933b0aaa5a8ca12cc5ea338134566a3b009ddceb9953,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710798390221846963,Labels:map[
string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vg6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9733a78d-fbd4-4873-a72b-b221ba732988,},Annotations:map[string]string{io.kubernetes.container.hash: 8287e311,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875abd4778e3f4e61528fff1edb81381f85098028e245129c1e541da16025aa,PodSandboxId:751d2b680909c44299847c4227ec76461923d3ea91a2bd4ae215dffed63b032e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710798385353651922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce31a364-ebb5-4786-8832-f01f165bc442,},Annotations:map[string]string{io.kubernetes.container.hash: 365bdd2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9892edde7da9384be1ec31557ade66adf42f0116793d31e75eff148c19e6bc,PodSandboxId:1837aefba145bc659f36b769f4f56613c7ed0391d1e6bc5c8ddfd77a2532852f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710798385982213558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rcs62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0725a6c-e539-4d69-8b27-48ac3ef078b5,},Annotations:map[string]string{io.kubernetes.container.hash: d88e6a62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dbb526df1f0a49a8e097193033154b29359716bb726c74f681830e713ff9c,PodSandboxId:4d402f63299cb216b887c733e9c2d56e94a13f4e186635827f10e38478a24
8d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710798385502209270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b203bac04d37b74af59b9a4f6068c7e1,},Annotations:map[string]string{io.kubernetes.container.hash: fba7b131,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5577534d60621be2caa2da8e33fa7338d0a087cfb42520584a239642e34a3f35,PodSandboxId:20a7bc3af4265e0e680279006c0d287dbf494bff6c98a37f2a2bf1bd55669243,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710798385199549968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 374d5959,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead08e82e5b3657f652e6348867477a6c6accc165d52c41fe4a72943ad116384,PodSandboxId:712ecf58a55b8db41502fe58fb243af5ea1648d7f8adab9e17e0ff6ba3ffcc7d,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710798385247020351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658753e656da1fffa796f13d26ff36e0,},Annotations:map[string]string{io.kubernetes.container.hash: 88afc410,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822b09ec08e6d6f422eb90b4825baa1a1e01f17d2ab2a639e65c3662529e803,PodSandboxId:a5a1eb7b27ada651df0ba8e5a5258e5e8c9611f9996e15cfb9b534b32a78018f,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710798384962269504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84667172dd04302b8babc9336c170e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cae570a8-96b8-4484-8f18-42fd59032a38 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.948225553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=306eb281-e0a0-4067-b4c7-a35e5718b367 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.948331051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=306eb281-e0a0-4067-b4c7-a35e5718b367 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.949352195Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9f26370-1dfe-449c-9c9f-7508238dba13 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.950007413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710798411949981233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9f26370-1dfe-449c-9c9f-7508238dba13 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.950543855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a9586fe-2f17-4ecd-a076-a70a8557694f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.950624671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a9586fe-2f17-4ecd-a076-a70a8557694f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.951201611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f3f71bac92a8186e1440d7e459ab57e28a81bd459b4903e1c9851f107054c93,PodSandboxId:6caee40c6ce358ff78d74235d7fcf998a86aa8c80281644c172c1112756bcd1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710798409205145074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0a596e372108b758c3f53dacd9143,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.c
ontainer.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04b88ee3da7984aa13bb46f98ec4ec4dee64d12072ac15ced0efe7b8d274387,PodSandboxId:01882ad0d8984e0a2be509fec2a65afa73c4d8afd7a7f12632cf1a4452eb7236,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710798408304539433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rcs62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0725a6c-e539-4d69-8b27-48ac3ef078b5,},Annotations:map[string]string{io.kubernetes.container.hash: d88e6a62,io.kubernetes.container.ports: [{\"name\":\"dn
s\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3179bb2a78790df05213923f26f9b648d17018565a29bd419f469233e9f965,PodSandboxId:0ab2cebb8498b312f3ec933b0aaa5a8ca12cc5ea338134566a3b009ddceb9953,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710798408215965974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vg6k2,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 9733a78d-fbd4-4873-a72b-b221ba732988,},Annotations:map[string]string{io.kubernetes.container.hash: 8287e311,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1868292a57b31f4a2d2ad75113d391c6617456d11c14d20f28f6d7e9df625,PodSandboxId:6a898e95988f016a6a3aecdfa9edc1e090f2ea16a9f7bdac2eb6b3218d8a3f21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f7
09a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710798408302639937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 374d5959,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08fbbc8339be04e349ac8f08111ae27fbb2ea808b6d36277f75248559a9a2a3,PodSandboxId:f2ad61daef03f42c16aade13d93f989392d611eeb46adebef3a2a0d7c8068c11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Sta
te:CONTAINER_RUNNING,CreatedAt:1710798408197098276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce31a364-ebb5-4786-8832-f01f165bc442,},Annotations:map[string]string{io.kubernetes.container.hash: 365bdd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff9baa6f9bf5c9b7d687ceeb57d56f591030f86ae5ddc6f9ffec7797f1e1b338,PodSandboxId:157b09c7e7648e8984372ecd45531bf049ddb7c62283e23b101c0754ac629be8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:171079840
3420867757,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b203bac04d37b74af59b9a4f6068c7e1,},Annotations:map[string]string{io.kubernetes.container.hash: fba7b131,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befaed49c7eed51bd2ed2aa3ec4cd3d37027843d7b2e86e48faa3ca326df5c08,PodSandboxId:335d77613d1c5fef9afc88f6334ccf83468e89ce0f707bb2013208c92afe6273,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710798403424059368,Labels:map[str
ing]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84667172dd04302b8babc9336c170e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08d5b771ed3bac5c8541788d67665717c903b4b637b8fd1c356266ca2e0a691,PodSandboxId:6d2a8809947f8cc585f5c503f68af52f01d28b3250d0712558b2250e7be0e4ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710798403400912996,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658753e656da1fffa796f13d26ff36e0,},Annotations:map[string]string{io.kubernetes.container.hash: 88afc410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4834e77809713c6c4e918424cd9b8671ca007205a5c46d5753bee08b79aa1f7,PodSandboxId:6caee40c6ce358ff78d74235d7fcf998a86aa8c80281644c172c1112756bcd1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710798397019285382,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0a596e372108b758c3f53dacd9143,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f7458e204a3379e87294e8a81d9ff2e54f10394e3743a3709e4d5683ca615,PodSandboxId:0ab2cebb8498b312f3ec933b0aaa5a8ca12cc5ea338134566a3b009ddceb9953,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710798390221846963,Labels:map[
string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vg6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9733a78d-fbd4-4873-a72b-b221ba732988,},Annotations:map[string]string{io.kubernetes.container.hash: 8287e311,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875abd4778e3f4e61528fff1edb81381f85098028e245129c1e541da16025aa,PodSandboxId:751d2b680909c44299847c4227ec76461923d3ea91a2bd4ae215dffed63b032e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710798385353651922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce31a364-ebb5-4786-8832-f01f165bc442,},Annotations:map[string]string{io.kubernetes.container.hash: 365bdd2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9892edde7da9384be1ec31557ade66adf42f0116793d31e75eff148c19e6bc,PodSandboxId:1837aefba145bc659f36b769f4f56613c7ed0391d1e6bc5c8ddfd77a2532852f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710798385982213558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rcs62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0725a6c-e539-4d69-8b27-48ac3ef078b5,},Annotations:map[string]string{io.kubernetes.container.hash: d88e6a62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dbb526df1f0a49a8e097193033154b29359716bb726c74f681830e713ff9c,PodSandboxId:4d402f63299cb216b887c733e9c2d56e94a13f4e186635827f10e38478a24
8d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710798385502209270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b203bac04d37b74af59b9a4f6068c7e1,},Annotations:map[string]string{io.kubernetes.container.hash: fba7b131,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5577534d60621be2caa2da8e33fa7338d0a087cfb42520584a239642e34a3f35,PodSandboxId:20a7bc3af4265e0e680279006c0d287dbf494bff6c98a37f2a2bf1bd55669243,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710798385199549968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 374d5959,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead08e82e5b3657f652e6348867477a6c6accc165d52c41fe4a72943ad116384,PodSandboxId:712ecf58a55b8db41502fe58fb243af5ea1648d7f8adab9e17e0ff6ba3ffcc7d,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710798385247020351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658753e656da1fffa796f13d26ff36e0,},Annotations:map[string]string{io.kubernetes.container.hash: 88afc410,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822b09ec08e6d6f422eb90b4825baa1a1e01f17d2ab2a639e65c3662529e803,PodSandboxId:a5a1eb7b27ada651df0ba8e5a5258e5e8c9611f9996e15cfb9b534b32a78018f,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710798384962269504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84667172dd04302b8babc9336c170e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a9586fe-2f17-4ecd-a076-a70a8557694f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.994744174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78418771-9483-4496-b26a-d7c8ea9c84c9 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.994874577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78418771-9483-4496-b26a-d7c8ea9c84c9 name=/runtime.v1.RuntimeService/Version
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.996598224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96c054c7-7672-4e2d-a831-41fb6e3fb961 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.997393418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710798411997359898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96c054c7-7672-4e2d-a831-41fb6e3fb961 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.998309386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b1b76f3-8018-4cc2-9df9-a172e4888aa1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.998391547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b1b76f3-8018-4cc2-9df9-a172e4888aa1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 21:46:51 kubernetes-upgrade-397473 crio[2808]: time="2024-03-18 21:46:51.999043251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4f3f71bac92a8186e1440d7e459ab57e28a81bd459b4903e1c9851f107054c93,PodSandboxId:6caee40c6ce358ff78d74235d7fcf998a86aa8c80281644c172c1112756bcd1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710798409205145074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0a596e372108b758c3f53dacd9143,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.c
ontainer.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04b88ee3da7984aa13bb46f98ec4ec4dee64d12072ac15ced0efe7b8d274387,PodSandboxId:01882ad0d8984e0a2be509fec2a65afa73c4d8afd7a7f12632cf1a4452eb7236,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710798408304539433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rcs62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0725a6c-e539-4d69-8b27-48ac3ef078b5,},Annotations:map[string]string{io.kubernetes.container.hash: d88e6a62,io.kubernetes.container.ports: [{\"name\":\"dn
s\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3179bb2a78790df05213923f26f9b648d17018565a29bd419f469233e9f965,PodSandboxId:0ab2cebb8498b312f3ec933b0aaa5a8ca12cc5ea338134566a3b009ddceb9953,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710798408215965974,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vg6k2,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 9733a78d-fbd4-4873-a72b-b221ba732988,},Annotations:map[string]string{io.kubernetes.container.hash: 8287e311,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1868292a57b31f4a2d2ad75113d391c6617456d11c14d20f28f6d7e9df625,PodSandboxId:6a898e95988f016a6a3aecdfa9edc1e090f2ea16a9f7bdac2eb6b3218d8a3f21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f7
09a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710798408302639937,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 374d5959,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08fbbc8339be04e349ac8f08111ae27fbb2ea808b6d36277f75248559a9a2a3,PodSandboxId:f2ad61daef03f42c16aade13d93f989392d611eeb46adebef3a2a0d7c8068c11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Sta
te:CONTAINER_RUNNING,CreatedAt:1710798408197098276,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce31a364-ebb5-4786-8832-f01f165bc442,},Annotations:map[string]string{io.kubernetes.container.hash: 365bdd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff9baa6f9bf5c9b7d687ceeb57d56f591030f86ae5ddc6f9ffec7797f1e1b338,PodSandboxId:157b09c7e7648e8984372ecd45531bf049ddb7c62283e23b101c0754ac629be8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:171079840
3420867757,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b203bac04d37b74af59b9a4f6068c7e1,},Annotations:map[string]string{io.kubernetes.container.hash: fba7b131,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befaed49c7eed51bd2ed2aa3ec4cd3d37027843d7b2e86e48faa3ca326df5c08,PodSandboxId:335d77613d1c5fef9afc88f6334ccf83468e89ce0f707bb2013208c92afe6273,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710798403424059368,Labels:map[str
ing]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84667172dd04302b8babc9336c170e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08d5b771ed3bac5c8541788d67665717c903b4b637b8fd1c356266ca2e0a691,PodSandboxId:6d2a8809947f8cc585f5c503f68af52f01d28b3250d0712558b2250e7be0e4ec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710798403400912996,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658753e656da1fffa796f13d26ff36e0,},Annotations:map[string]string{io.kubernetes.container.hash: 88afc410,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4834e77809713c6c4e918424cd9b8671ca007205a5c46d5753bee08b79aa1f7,PodSandboxId:6caee40c6ce358ff78d74235d7fcf998a86aa8c80281644c172c1112756bcd1e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710798397019285382,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c0a596e372108b758c3f53dacd9143,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b8f7458e204a3379e87294e8a81d9ff2e54f10394e3743a3709e4d5683ca615,PodSandboxId:0ab2cebb8498b312f3ec933b0aaa5a8ca12cc5ea338134566a3b009ddceb9953,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710798390221846963,Labels:map[
string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-vg6k2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9733a78d-fbd4-4873-a72b-b221ba732988,},Annotations:map[string]string{io.kubernetes.container.hash: 8287e311,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6875abd4778e3f4e61528fff1edb81381f85098028e245129c1e541da16025aa,PodSandboxId:751d2b680909c44299847c4227ec76461923d3ea91a2bd4ae215dffed63b032e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710798385353651922,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9wrmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce31a364-ebb5-4786-8832-f01f165bc442,},Annotations:map[string]string{io.kubernetes.container.hash: 365bdd2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e9892edde7da9384be1ec31557ade66adf42f0116793d31e75eff148c19e6bc,PodSandboxId:1837aefba145bc659f36b769f4f56613c7ed0391d1e6bc5c8ddfd77a2532852f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710798385982213558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rcs62,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0725a6c-e539-4d69-8b27-48ac3ef078b5,},Annotations:map[string]string{io.kubernetes.container.hash: d88e6a62,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:237dbb526df1f0a49a8e097193033154b29359716bb726c74f681830e713ff9c,PodSandboxId:4d402f63299cb216b887c733e9c2d56e94a13f4e186635827f10e38478a24
8d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710798385502209270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b203bac04d37b74af59b9a4f6068c7e1,},Annotations:map[string]string{io.kubernetes.container.hash: fba7b131,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5577534d60621be2caa2da8e33fa7338d0a087cfb42520584a239642e34a3f35,PodSandboxId:20a7bc3af4265e0e680279006c0d287dbf494bff6c98a37f2a2bf1bd55669243,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710798385199549968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: 374d5959,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead08e82e5b3657f652e6348867477a6c6accc165d52c41fe4a72943ad116384,PodSandboxId:712ecf58a55b8db41502fe58fb243af5ea1648d7f8adab9e17e0ff6ba3ffcc7d,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710798385247020351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 658753e656da1fffa796f13d26ff36e0,},Annotations:map[string]string{io.kubernetes.container.hash: 88afc410,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822b09ec08e6d6f422eb90b4825baa1a1e01f17d2ab2a639e65c3662529e803,PodSandboxId:a5a1eb7b27ada651df0ba8e5a5258e5e8c9611f9996e15cfb9b534b32a78018f,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710798384962269504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-397473,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84667172dd04302b8babc9336c170e8,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b1b76f3-8018-4cc2-9df9-a172e4888aa1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4f3f71bac92a8       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   2 seconds ago       Running             kube-controller-manager   3                   6caee40c6ce35       kube-controller-manager-kubernetes-upgrade-397473
	b04b88ee3da79       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   01882ad0d8984       coredns-76f75df574-rcs62
	ffb1868292a57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   6a898e95988f0       storage-provisioner
	6b3179bb2a787       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   0ab2cebb8498b       coredns-76f75df574-vg6k2
	f08fbbc8339be       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   3 seconds ago       Running             kube-proxy                2                   f2ad61daef03f       kube-proxy-9wrmh
	befaed49c7eed       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   8 seconds ago       Running             kube-scheduler            2                   335d77613d1c5       kube-scheduler-kubernetes-upgrade-397473
	ff9baa6f9bf5c       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   8 seconds ago       Running             etcd                      2                   157b09c7e7648       etcd-kubernetes-upgrade-397473
	f08d5b771ed3b       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   8 seconds ago       Running             kube-apiserver            2                   6d2a8809947f8       kube-apiserver-kubernetes-upgrade-397473
	f4834e7780971       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   15 seconds ago      Exited              kube-controller-manager   2                   6caee40c6ce35       kube-controller-manager-kubernetes-upgrade-397473
	3b8f7458e204a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Exited              coredns                   1                   0ab2cebb8498b       coredns-76f75df574-vg6k2
	4e9892edde7da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago      Exited              coredns                   1                   1837aefba145b       coredns-76f75df574-rcs62
	237dbb526df1f       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   26 seconds ago      Exited              etcd                      1                   4d402f63299cb       etcd-kubernetes-upgrade-397473
	6875abd4778e3       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   26 seconds ago      Exited              kube-proxy                1                   751d2b680909c       kube-proxy-9wrmh
	ead08e82e5b36       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   26 seconds ago      Exited              kube-apiserver            1                   712ecf58a55b8       kube-apiserver-kubernetes-upgrade-397473
	5577534d60621       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   26 seconds ago      Exited              storage-provisioner       1                   20a7bc3af4265       storage-provisioner
	4822b09ec08e6       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   27 seconds ago      Exited              kube-scheduler            1                   a5a1eb7b27ada       kube-scheduler-kubernetes-upgrade-397473
	
	
	==> coredns [3b8f7458e204a3379e87294e8a81d9ff2e54f10394e3743a3709e4d5683ca615] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4e9892edde7da9384be1ec31557ade66adf42f0116793d31e75eff148c19e6bc] <==
	
	
	==> coredns [6b3179bb2a78790df05213923f26f9b648d17018565a29bd419f469233e9f965] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b04b88ee3da7984aa13bb46f98ec4ec4dee64d12072ac15ced0efe7b8d274387] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-397473
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-397473
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:45:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-397473
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 21:46:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 21:46:47 +0000   Mon, 18 Mar 2024 21:45:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 21:46:47 +0000   Mon, 18 Mar 2024 21:45:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 21:46:47 +0000   Mon, 18 Mar 2024 21:45:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 21:46:47 +0000   Mon, 18 Mar 2024 21:45:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.139
	  Hostname:    kubernetes-upgrade-397473
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 22c5de85a21e4a49bc00fae9750b9561
	  System UUID:                22c5de85-a21e-4a49-bc00-fae9750b9561
	  Boot ID:                    1db6321e-6ee5-41c6-bde5-fc68949fb92c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-rcs62                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 coredns-76f75df574-vg6k2                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 etcd-kubernetes-upgrade-397473                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	  kube-system                 kube-apiserver-kubernetes-upgrade-397473              250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-397473    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-9wrmh                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-kubernetes-upgrade-397473              100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node kubernetes-upgrade-397473 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node kubernetes-upgrade-397473 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node kubernetes-upgrade-397473 status is now: NodeHasSufficientMemory
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           63s                node-controller  Node kubernetes-upgrade-397473 event: Registered Node kubernetes-upgrade-397473 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 10s)   kubelet          Node kubernetes-upgrade-397473 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 10s)   kubelet          Node kubernetes-upgrade-397473 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 10s)   kubelet          Node kubernetes-upgrade-397473 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000030] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.679063] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.061691] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073842] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.171214] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.161147] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.303455] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +5.453301] systemd-fstab-generator[731]: Ignoring "noauto" option for root device
	[  +0.081232] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.263411] systemd-fstab-generator[855]: Ignoring "noauto" option for root device
	[ +12.031859] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.171267] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +8.543291] kauditd_printk_skb: 15 callbacks suppressed
	[Mar18 21:46] systemd-fstab-generator[2028]: Ignoring "noauto" option for root device
	[  +0.099039] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.113067] systemd-fstab-generator[2042]: Ignoring "noauto" option for root device
	[  +0.836818] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.656941] systemd-fstab-generator[2498]: Ignoring "noauto" option for root device
	[  +0.866523] systemd-fstab-generator[2636]: Ignoring "noauto" option for root device
	[  +1.727986] systemd-fstab-generator[3011]: Ignoring "noauto" option for root device
	[  +0.802855] kauditd_printk_skb: 236 callbacks suppressed
	[  +8.046294] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.465652] systemd-fstab-generator[3666]: Ignoring "noauto" option for root device
	[  +5.810794] kauditd_printk_skb: 30 callbacks suppressed
	[  +1.155535] systemd-fstab-generator[4174]: Ignoring "noauto" option for root device
	
	
	==> etcd [237dbb526df1f0a49a8e097193033154b29359716bb726c74f681830e713ff9c] <==
	{"level":"warn","ts":"2024-03-18T21:46:26.530939Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-18T21:46:26.531053Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.139:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.139:2380","--initial-cluster=kubernetes-upgrade-397473=https://192.168.39.139:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.139:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.139:2380","--name=kubernetes-upgrade-397473","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--sna
pshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-03-18T21:46:26.53117Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-03-18T21:46:26.531204Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-18T21:46:26.531224Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.139:2380"]}
	{"level":"info","ts":"2024-03-18T21:46:26.53126Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T21:46:26.554569Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.139:2379"]}
	{"level":"info","ts":"2024-03-18T21:46:26.554946Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.10","git-sha":"0223ca52b","go-version":"go1.20.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-397473","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.139:2380"],"listen-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new
","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-03-18T21:46:26.696099Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"140.836942ms"}
	{"level":"info","ts":"2024-03-18T21:46:26.772923Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	
	
	==> etcd [ff9baa6f9bf5c9b7d687ceeb57d56f591030f86ae5ddc6f9ffec7797f1e1b338] <==
	{"level":"info","ts":"2024-03-18T21:46:43.776861Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T21:46:43.776869Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T21:46:43.777083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d switched to configuration voters=(4376887760750500653)"}
	{"level":"info","ts":"2024-03-18T21:46:43.777179Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","added-peer-id":"3cbdd43a8949db2d","added-peer-peer-urls":["https://192.168.39.139:2380"]}
	{"level":"info","ts":"2024-03-18T21:46:43.777307Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4af51893258ecb17","local-member-id":"3cbdd43a8949db2d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:46:43.777358Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T21:46:43.792812Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T21:46:43.795688Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3cbdd43a8949db2d","initial-advertise-peer-urls":["https://192.168.39.139:2380"],"listen-peer-urls":["https://192.168.39.139:2380"],"advertise-client-urls":["https://192.168.39.139:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.139:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T21:46:43.795756Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T21:46:43.795826Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-03-18T21:46:43.79585Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.139:2380"}
	{"level":"info","ts":"2024-03-18T21:46:45.639347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T21:46:45.639504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T21:46:45.639545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgPreVoteResp from 3cbdd43a8949db2d at term 2"}
	{"level":"info","ts":"2024-03-18T21:46:45.639593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T21:46:45.639613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d received MsgVoteResp from 3cbdd43a8949db2d at term 3"}
	{"level":"info","ts":"2024-03-18T21:46:45.639626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cbdd43a8949db2d became leader at term 3"}
	{"level":"info","ts":"2024-03-18T21:46:45.639636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3cbdd43a8949db2d elected leader 3cbdd43a8949db2d at term 3"}
	{"level":"info","ts":"2024-03-18T21:46:45.650531Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:46:45.652755Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3cbdd43a8949db2d","local-member-attributes":"{Name:kubernetes-upgrade-397473 ClientURLs:[https://192.168.39.139:2379]}","request-path":"/0/members/3cbdd43a8949db2d/attributes","cluster-id":"4af51893258ecb17","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T21:46:45.652942Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T21:46:45.653122Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T21:46:45.653155Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T21:46:45.653868Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.139:2379"}
	{"level":"info","ts":"2024-03-18T21:46:45.655262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:46:52 up 1 min,  0 users,  load average: 2.37, 0.77, 0.27
	Linux kubernetes-upgrade-397473 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ead08e82e5b3657f652e6348867477a6c6accc165d52c41fe4a72943ad116384] <==
	I0318 21:46:26.252100       1 options.go:222] external host was not specified, using 192.168.39.139
	I0318 21:46:26.270743       1 server.go:148] Version: v1.29.0-rc.2
	I0318 21:46:26.271128       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [f08d5b771ed3bac5c8541788d67665717c903b4b637b8fd1c356266ca2e0a691] <==
	I0318 21:46:47.158628       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0318 21:46:47.158703       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 21:46:47.230242       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 21:46:47.230511       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 21:46:47.349764       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 21:46:47.350670       1 aggregator.go:165] initial CRD sync complete...
	I0318 21:46:47.350719       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 21:46:47.350729       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 21:46:47.360339       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 21:46:47.397023       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0318 21:46:47.401401       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 21:46:47.446896       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 21:46:47.453477       1 cache.go:39] Caches are synced for autoregister controller
	I0318 21:46:47.455115       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0318 21:46:47.455181       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0318 21:46:47.473720       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 21:46:47.473938       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 21:46:47.474003       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 21:46:48.153834       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 21:46:48.675812       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 21:46:49.149329       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 21:46:49.174102       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 21:46:49.264156       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 21:46:49.339627       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 21:46:49.359501       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [4f3f71bac92a8186e1440d7e459ab57e28a81bd459b4903e1c9851f107054c93] <==
	I0318 21:46:50.329326       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 21:46:50.329362       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 21:46:50.329474       1 shared_informer.go:318] Caches are synced for tokens
	I0318 21:46:50.331183       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0318 21:46:50.331378       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0318 21:46:50.331514       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	E0318 21:46:50.333092       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0318 21:46:50.333145       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0318 21:46:50.334873       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0318 21:46:50.335234       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0318 21:46:50.335246       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0318 21:46:50.337043       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0318 21:46:50.337114       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0318 21:46:50.337174       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0318 21:46:50.337184       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0318 21:46:50.339996       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0318 21:46:50.340391       1 gc_controller.go:101] "Starting GC controller"
	I0318 21:46:50.340533       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0318 21:46:50.347518       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0318 21:46:50.348347       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0318 21:46:50.348383       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0318 21:46:50.348602       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0318 21:46:50.351928       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0318 21:46:50.352305       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0318 21:46:50.352336       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	
	
	==> kube-controller-manager [f4834e77809713c6c4e918424cd9b8671ca007205a5c46d5753bee08b79aa1f7] <==
	I0318 21:46:37.819874       1 serving.go:380] Generated self-signed cert in-memory
	I0318 21:46:38.074262       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0318 21:46:38.074328       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:46:38.076259       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 21:46:38.076562       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 21:46:38.085552       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 21:46:38.086109       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0318 21:46:48.092783       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-sys
tem-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [6875abd4778e3f4e61528fff1edb81381f85098028e245129c1e541da16025aa] <==
	
	
	==> kube-proxy [f08fbbc8339be04e349ac8f08111ae27fbb2ea808b6d36277f75248559a9a2a3] <==
	I0318 21:46:48.655091       1 server_others.go:72] "Using iptables proxy"
	I0318 21:46:48.686579       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.139"]
	I0318 21:46:48.792484       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 21:46:48.792617       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:46:48.792646       1 server_others.go:168] "Using iptables Proxier"
	I0318 21:46:48.797048       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:46:48.797663       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 21:46:48.797852       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:46:48.801059       1 config.go:188] "Starting service config controller"
	I0318 21:46:48.802556       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:46:48.803466       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:46:48.803597       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:46:48.805303       1 config.go:315] "Starting node config controller"
	I0318 21:46:48.806863       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 21:46:48.905582       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 21:46:48.905666       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:46:48.908775       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4822b09ec08e6d6f422eb90b4825baa1a1e01f17d2ab2a639e65c3662529e803] <==
	
	
	==> kube-scheduler [befaed49c7eed51bd2ed2aa3ec4cd3d37027843d7b2e86e48faa3ca326df5c08] <==
	I0318 21:46:43.937126       1 serving.go:380] Generated self-signed cert in-memory
	W0318 21:46:47.302045       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 21:46:47.302138       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 21:46:47.302153       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 21:46:47.302165       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 21:46:47.388274       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0318 21:46:47.388352       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:46:47.409252       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 21:46:47.409904       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 21:46:47.409970       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 21:46:47.414769       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 21:46:47.521537       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 21:46:43 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:43.371154    3673 scope.go:117] "RemoveContainer" containerID="ead08e82e5b3657f652e6348867477a6c6accc165d52c41fe4a72943ad116384"
	Mar 18 21:46:43 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:43.375025    3673 scope.go:117] "RemoveContainer" containerID="4822b09ec08e6d6f422eb90b4825baa1a1e01f17d2ab2a639e65c3662529e803"
	Mar 18 21:46:43 kubernetes-upgrade-397473 kubelet[3673]: E0318 21:46:43.455176    3673 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-397473?timeout=10s\": dial tcp 192.168.39.139:8443: connect: connection refused" interval="800ms"
	Mar 18 21:46:43 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:43.549313    3673 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-397473"
	Mar 18 21:46:43 kubernetes-upgrade-397473 kubelet[3673]: E0318 21:46:43.550382    3673 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.139:8443: connect: connection refused" node="kubernetes-upgrade-397473"
	Mar 18 21:46:44 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:44.352132    3673 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-397473"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.493571    3673 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-397473"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.493735    3673 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-397473"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.496644    3673 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.498016    3673 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.826188    3673 apiserver.go:52] "Watching apiserver"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.832199    3673 topology_manager.go:215] "Topology Admit Handler" podUID="edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9" podNamespace="kube-system" podName="storage-provisioner"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.832532    3673 topology_manager.go:215] "Topology Admit Handler" podUID="a0725a6c-e539-4d69-8b27-48ac3ef078b5" podNamespace="kube-system" podName="coredns-76f75df574-rcs62"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.832684    3673 topology_manager.go:215] "Topology Admit Handler" podUID="9733a78d-fbd4-4873-a72b-b221ba732988" podNamespace="kube-system" podName="coredns-76f75df574-vg6k2"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.832812    3673 topology_manager.go:215] "Topology Admit Handler" podUID="ce31a364-ebb5-4786-8832-f01f165bc442" podNamespace="kube-system" podName="kube-proxy-9wrmh"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.841904    3673 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.935274    3673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9-tmp\") pod \"storage-provisioner\" (UID: \"edff9f6f-3ab2-4c4a-b5fa-a6ab4fb1d8d9\") " pod="kube-system/storage-provisioner"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.935630    3673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce31a364-ebb5-4786-8832-f01f165bc442-lib-modules\") pod \"kube-proxy-9wrmh\" (UID: \"ce31a364-ebb5-4786-8832-f01f165bc442\") " pod="kube-system/kube-proxy-9wrmh"
	Mar 18 21:46:47 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:47.935743    3673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce31a364-ebb5-4786-8832-f01f165bc442-xtables-lock\") pod \"kube-proxy-9wrmh\" (UID: \"ce31a364-ebb5-4786-8832-f01f165bc442\") " pod="kube-system/kube-proxy-9wrmh"
	Mar 18 21:46:48 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:48.133373    3673 scope.go:117] "RemoveContainer" containerID="6875abd4778e3f4e61528fff1edb81381f85098028e245129c1e541da16025aa"
	Mar 18 21:46:48 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:48.136638    3673 scope.go:117] "RemoveContainer" containerID="5577534d60621be2caa2da8e33fa7338d0a087cfb42520584a239642e34a3f35"
	Mar 18 21:46:48 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:48.136968    3673 scope.go:117] "RemoveContainer" containerID="3b8f7458e204a3379e87294e8a81d9ff2e54f10394e3743a3709e4d5683ca615"
	Mar 18 21:46:48 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:48.143613    3673 scope.go:117] "RemoveContainer" containerID="4e9892edde7da9384be1ec31557ade66adf42f0116793d31e75eff148c19e6bc"
	Mar 18 21:46:49 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:49.123685    3673 scope.go:117] "RemoveContainer" containerID="310921f2530d61d94fad534f195bf8b3ba98a6d2ff39bcd98e5326715460c760"
	Mar 18 21:46:49 kubernetes-upgrade-397473 kubelet[3673]: I0318 21:46:49.123974    3673 scope.go:117] "RemoveContainer" containerID="f4834e77809713c6c4e918424cd9b8671ca007205a5c46d5753bee08b79aa1f7"
	
	
	==> storage-provisioner [5577534d60621be2caa2da8e33fa7338d0a087cfb42520584a239642e34a3f35] <==
	I0318 21:46:26.739061       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [ffb1868292a57b31f4a2d2ad75113d391c6617456d11c14d20f28f6d7e9df625] <==
	I0318 21:46:48.605554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 21:46:48.654170       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 21:46:48.659741       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 21:46:48.755254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 21:46:48.757837       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-397473_75f6ee20-1d0d-45fc-a447-b7efb2a0acbc!
	I0318 21:46:48.757530       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa436b54-bb2e-436d-bae3-9eeefcb6f1ca", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-397473_75f6ee20-1d0d-45fc-a447-b7efb2a0acbc became leader
	I0318 21:46:48.858548       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-397473_75f6ee20-1d0d-45fc-a447-b7efb2a0acbc!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:46:51.335415   55017 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18421-5321/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
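
The "bufio.Scanner: token too long" message in the stderr block above is a stock Go error, not a minikube-specific one: bufio.Scanner refuses any single token (here, a line of lastStart.txt) larger than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB). The sketch below illustrates the failure and the usual workaround using only the Go standard library; it is an illustration of the error, not the harness's own logs.go code.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// One "line" (no newline) longer than the scanner's default 64 KiB limit.
	long := strings.Repeat("x", 2*bufio.MaxScanTokenSize)

	// Default buffer: Scan() gives up and Err() reports "token too long".
	s := bufio.NewScanner(strings.NewReader(long))
	for s.Scan() {
	}
	fmt.Println("default buffer:", s.Err()) // bufio.Scanner: token too long

	// Workaround: raise the maximum token size before scanning.
	s = bufio.NewScanner(strings.NewReader(long))
	s.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	for s.Scan() {
	}
	fmt.Println("larger buffer:", s.Err()) // <nil>
}

In this report the error only means the harness could not echo lastStart.txt while collecting post-mortem logs; it is not itself the cause of the TestKubernetesUpgrade failure recorded below.
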
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-397473 -n kubernetes-upgrade-397473
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-397473 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-397473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-397473
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-397473: (1.10224711s)
--- FAIL: TestKubernetesUpgrade (422.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (291.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-648232 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-648232 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m51.206529188s)

                                                
                                                
-- stdout --
	* [old-k8s-version-648232] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-648232" primary control-plane node in "old-k8s-version-648232" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:47:42.391535   58466 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:47:42.391884   58466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:47:42.391900   58466 out.go:304] Setting ErrFile to fd 2...
	I0318 21:47:42.391918   58466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:47:42.392239   58466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:47:42.393106   58466 out.go:298] Setting JSON to false
	I0318 21:47:42.394565   58466 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5406,"bootTime":1710793056,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:47:42.394651   58466 start.go:139] virtualization: kvm guest
	I0318 21:47:42.397001   58466 out.go:177] * [old-k8s-version-648232] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:47:42.398830   58466 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:47:42.398833   58466 notify.go:220] Checking for updates...
	I0318 21:47:42.400207   58466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:47:42.401587   58466 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:47:42.404090   58466 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:47:42.405398   58466 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:47:42.406816   58466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:47:42.408929   58466 config.go:182] Loaded profile config "bridge-389288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:47:42.409146   58466 config.go:182] Loaded profile config "enable-default-cni-389288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:47:42.409293   58466 config.go:182] Loaded profile config "flannel-389288": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:47:42.409445   58466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:47:42.462292   58466 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 21:47:42.463561   58466 start.go:297] selected driver: kvm2
	I0318 21:47:42.463578   58466 start.go:901] validating driver "kvm2" against <nil>
	I0318 21:47:42.463594   58466 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:47:42.464631   58466 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:47:42.464710   58466 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:47:42.488215   58466 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:47:42.488285   58466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 21:47:42.488588   58466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:47:42.488674   58466 cni.go:84] Creating CNI manager for ""
	I0318 21:47:42.488696   58466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:47:42.488712   58466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 21:47:42.488795   58466 start.go:340] cluster config:
	{Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:47:42.488956   58466 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:47:42.490656   58466 out.go:177] * Starting "old-k8s-version-648232" primary control-plane node in "old-k8s-version-648232" cluster
	I0318 21:47:42.491796   58466 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:47:42.491836   58466 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 21:47:42.491845   58466 cache.go:56] Caching tarball of preloaded images
	I0318 21:47:42.491924   58466 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 21:47:42.491938   58466 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
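
For context, the cache check above boils down to a stat on the preloaded tarball before falling back to a download. A minimal Go sketch of that decision (the path is copied from the log lines; the logic is illustrative, not minikube's actual preload.go):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the log lines above; adjust for your own cache layout.
        tarball := "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
        if _, err := os.Stat(tarball); err == nil {
            fmt.Println("Found local preload, skipping download:", tarball)
        } else {
            fmt.Println("Preload missing, would download it:", err)
        }
    }
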
	I0318 21:47:42.492065   58466 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:47:42.492094   58466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json: {Name:mka8575a7d907424014bf7651d4dc6b45967fdd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:47:42.492269   58466 start.go:360] acquireMachinesLock for old-k8s-version-648232: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:47:57.690580   58466 start.go:364] duration metric: took 15.198281662s to acquireMachinesLock for "old-k8s-version-648232"
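
The 15s spent above is a bounded wait on a coarse machines lock (Delay:500ms Timeout:13m0s in the log). A rough sketch of the same acquire-with-timeout pattern, using a hypothetical file lock rather than minikube's actual lock.go:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // tryAcquire creates the lock file exclusively; an error means another
    // process currently holds the lock.
    func tryAcquire(path string) bool {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
        if err != nil {
            return false
        }
        f.Close()
        return true
    }

    // acquireWithTimeout polls every delay until the lock frees up or the
    // timeout expires, mirroring the Delay/Timeout fields logged above.
    func acquireWithTimeout(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if tryAcquire(path) {
                return nil
            }
            time.Sleep(delay)
        }
        return errors.New("timed out waiting for machines lock")
    }

    func main() {
        start := time.Now()
        if err := acquireWithTimeout("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
    }
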
	I0318 21:47:57.690649   58466 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:47:57.690727   58466 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 21:47:57.692226   58466 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 21:47:57.692424   58466 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:47:57.692484   58466 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:47:57.712128   58466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0318 21:47:57.712607   58466 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:47:57.713226   58466 main.go:141] libmachine: Using API Version  1
	I0318 21:47:57.713252   58466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:47:57.713612   58466 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:47:57.713829   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:47:57.714010   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:47:57.714148   58466 start.go:159] libmachine.API.Create for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:47:57.714189   58466 client.go:168] LocalClient.Create starting
	I0318 21:47:57.714223   58466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 21:47:57.714261   58466 main.go:141] libmachine: Decoding PEM data...
	I0318 21:47:57.714289   58466 main.go:141] libmachine: Parsing certificate...
	I0318 21:47:57.714364   58466 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 21:47:57.714395   58466 main.go:141] libmachine: Decoding PEM data...
	I0318 21:47:57.714412   58466 main.go:141] libmachine: Parsing certificate...
	I0318 21:47:57.714441   58466 main.go:141] libmachine: Running pre-create checks...
	I0318 21:47:57.714455   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .PreCreateCheck
	I0318 21:47:57.714864   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:47:57.715254   58466 main.go:141] libmachine: Creating machine...
	I0318 21:47:57.715267   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .Create
	I0318 21:47:57.715393   58466 main.go:141] libmachine: (old-k8s-version-648232) Creating KVM machine...
	I0318 21:47:57.716742   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found existing default KVM network
	I0318 21:47:57.717945   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:57.717813   58598 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:12:a9:cb} reservation:<nil>}
	I0318 21:47:57.719006   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:57.718897   58598 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:39:7a:5d} reservation:<nil>}
	I0318 21:47:57.720044   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:57.719954   58598 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289340}
	I0318 21:47:57.720074   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | created network xml: 
	I0318 21:47:57.720084   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | <network>
	I0318 21:47:57.720095   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |   <name>mk-old-k8s-version-648232</name>
	I0318 21:47:57.720105   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |   <dns enable='no'/>
	I0318 21:47:57.720114   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |   
	I0318 21:47:57.720124   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0318 21:47:57.720133   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |     <dhcp>
	I0318 21:47:57.720143   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0318 21:47:57.720151   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |     </dhcp>
	I0318 21:47:57.720159   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |   </ip>
	I0318 21:47:57.720167   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG |   
	I0318 21:47:57.720175   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | </network>
	I0318 21:47:57.720184   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | 
	I0318 21:47:57.725631   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | trying to create private KVM network mk-old-k8s-version-648232 192.168.61.0/24...
	I0318 21:47:57.801610   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | private KVM network mk-old-k8s-version-648232 192.168.61.0/24 created
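
For readers unfamiliar with the step above: the driver turns the generated XML into a private libvirt network. A minimal sketch of that call using the libvirt.org/go/libvirt bindings (an assumption for illustration; the real code path lives in docker-machine-driver-kvm2):

    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // XML copied from the "created network xml" lines above.
    const networkXML = `<network>
      <name>mk-old-k8s-version-648232</name>
      <dns enable='no'/>
      <ip address='192.168.61.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.61.2' end='192.168.61.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // NetworkCreateXML defines and starts the network in one call.
        net, err := conn.NetworkCreateXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()

        name, _ := net.GetName()
        fmt.Println("created private network:", name)
    }
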
	I0318 21:47:57.801632   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:57.801559   58598 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:47:57.801658   58466 main.go:141] libmachine: (old-k8s-version-648232) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232 ...
	I0318 21:47:57.801677   58466 main.go:141] libmachine: (old-k8s-version-648232) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 21:47:57.801690   58466 main.go:141] libmachine: (old-k8s-version-648232) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 21:47:58.049275   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:58.049147   58598 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa...
	I0318 21:47:58.183501   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:58.183299   58598 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/old-k8s-version-648232.rawdisk...
	I0318 21:47:58.183537   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Writing magic tar header
	I0318 21:47:58.183555   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Writing SSH key tar header
	I0318 21:47:58.183569   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:58.183436   58598 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232 ...
	I0318 21:47:58.183588   58466 main.go:141] libmachine: (old-k8s-version-648232) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232 (perms=drwx------)
	I0318 21:47:58.183608   58466 main.go:141] libmachine: (old-k8s-version-648232) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 21:47:58.183625   58466 main.go:141] libmachine: (old-k8s-version-648232) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 21:47:58.183635   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232
	I0318 21:47:58.183650   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 21:47:58.183661   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:47:58.183681   58466 main.go:141] libmachine: (old-k8s-version-648232) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 21:47:58.183695   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 21:47:58.183709   58466 main.go:141] libmachine: (old-k8s-version-648232) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 21:47:58.183724   58466 main.go:141] libmachine: (old-k8s-version-648232) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 21:47:58.183735   58466 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
	I0318 21:47:58.183750   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 21:47:58.183763   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Checking permissions on dir: /home/jenkins
	I0318 21:47:58.183806   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Checking permissions on dir: /home
	I0318 21:47:58.183837   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Skipping /home - not owner
	I0318 21:47:58.185113   58466 main.go:141] libmachine: (old-k8s-version-648232) define libvirt domain using xml: 
	I0318 21:47:58.185146   58466 main.go:141] libmachine: (old-k8s-version-648232) <domain type='kvm'>
	I0318 21:47:58.185158   58466 main.go:141] libmachine: (old-k8s-version-648232)   <name>old-k8s-version-648232</name>
	I0318 21:47:58.185167   58466 main.go:141] libmachine: (old-k8s-version-648232)   <memory unit='MiB'>2200</memory>
	I0318 21:47:58.185178   58466 main.go:141] libmachine: (old-k8s-version-648232)   <vcpu>2</vcpu>
	I0318 21:47:58.185188   58466 main.go:141] libmachine: (old-k8s-version-648232)   <features>
	I0318 21:47:58.185199   58466 main.go:141] libmachine: (old-k8s-version-648232)     <acpi/>
	I0318 21:47:58.185210   58466 main.go:141] libmachine: (old-k8s-version-648232)     <apic/>
	I0318 21:47:58.185249   58466 main.go:141] libmachine: (old-k8s-version-648232)     <pae/>
	I0318 21:47:58.185274   58466 main.go:141] libmachine: (old-k8s-version-648232)     
	I0318 21:47:58.185284   58466 main.go:141] libmachine: (old-k8s-version-648232)   </features>
	I0318 21:47:58.185293   58466 main.go:141] libmachine: (old-k8s-version-648232)   <cpu mode='host-passthrough'>
	I0318 21:47:58.185301   58466 main.go:141] libmachine: (old-k8s-version-648232)   
	I0318 21:47:58.185309   58466 main.go:141] libmachine: (old-k8s-version-648232)   </cpu>
	I0318 21:47:58.185318   58466 main.go:141] libmachine: (old-k8s-version-648232)   <os>
	I0318 21:47:58.185333   58466 main.go:141] libmachine: (old-k8s-version-648232)     <type>hvm</type>
	I0318 21:47:58.185346   58466 main.go:141] libmachine: (old-k8s-version-648232)     <boot dev='cdrom'/>
	I0318 21:47:58.185352   58466 main.go:141] libmachine: (old-k8s-version-648232)     <boot dev='hd'/>
	I0318 21:47:58.185374   58466 main.go:141] libmachine: (old-k8s-version-648232)     <bootmenu enable='no'/>
	I0318 21:47:58.185392   58466 main.go:141] libmachine: (old-k8s-version-648232)   </os>
	I0318 21:47:58.185401   58466 main.go:141] libmachine: (old-k8s-version-648232)   <devices>
	I0318 21:47:58.185417   58466 main.go:141] libmachine: (old-k8s-version-648232)     <disk type='file' device='cdrom'>
	I0318 21:47:58.185462   58466 main.go:141] libmachine: (old-k8s-version-648232)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/boot2docker.iso'/>
	I0318 21:47:58.185488   58466 main.go:141] libmachine: (old-k8s-version-648232)       <target dev='hdc' bus='scsi'/>
	I0318 21:47:58.185503   58466 main.go:141] libmachine: (old-k8s-version-648232)       <readonly/>
	I0318 21:47:58.185514   58466 main.go:141] libmachine: (old-k8s-version-648232)     </disk>
	I0318 21:47:58.185526   58466 main.go:141] libmachine: (old-k8s-version-648232)     <disk type='file' device='disk'>
	I0318 21:47:58.185539   58466 main.go:141] libmachine: (old-k8s-version-648232)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 21:47:58.185554   58466 main.go:141] libmachine: (old-k8s-version-648232)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/old-k8s-version-648232.rawdisk'/>
	I0318 21:47:58.185566   58466 main.go:141] libmachine: (old-k8s-version-648232)       <target dev='hda' bus='virtio'/>
	I0318 21:47:58.185575   58466 main.go:141] libmachine: (old-k8s-version-648232)     </disk>
	I0318 21:47:58.185590   58466 main.go:141] libmachine: (old-k8s-version-648232)     <interface type='network'>
	I0318 21:47:58.185600   58466 main.go:141] libmachine: (old-k8s-version-648232)       <source network='mk-old-k8s-version-648232'/>
	I0318 21:47:58.185626   58466 main.go:141] libmachine: (old-k8s-version-648232)       <model type='virtio'/>
	I0318 21:47:58.185639   58466 main.go:141] libmachine: (old-k8s-version-648232)     </interface>
	I0318 21:47:58.185650   58466 main.go:141] libmachine: (old-k8s-version-648232)     <interface type='network'>
	I0318 21:47:58.185674   58466 main.go:141] libmachine: (old-k8s-version-648232)       <source network='default'/>
	I0318 21:47:58.185696   58466 main.go:141] libmachine: (old-k8s-version-648232)       <model type='virtio'/>
	I0318 21:47:58.185708   58466 main.go:141] libmachine: (old-k8s-version-648232)     </interface>
	I0318 21:47:58.185727   58466 main.go:141] libmachine: (old-k8s-version-648232)     <serial type='pty'>
	I0318 21:47:58.185736   58466 main.go:141] libmachine: (old-k8s-version-648232)       <target port='0'/>
	I0318 21:47:58.185747   58466 main.go:141] libmachine: (old-k8s-version-648232)     </serial>
	I0318 21:47:58.185756   58466 main.go:141] libmachine: (old-k8s-version-648232)     <console type='pty'>
	I0318 21:47:58.185764   58466 main.go:141] libmachine: (old-k8s-version-648232)       <target type='serial' port='0'/>
	I0318 21:47:58.185794   58466 main.go:141] libmachine: (old-k8s-version-648232)     </console>
	I0318 21:47:58.185807   58466 main.go:141] libmachine: (old-k8s-version-648232)     <rng model='virtio'>
	I0318 21:47:58.185818   58466 main.go:141] libmachine: (old-k8s-version-648232)       <backend model='random'>/dev/random</backend>
	I0318 21:47:58.185829   58466 main.go:141] libmachine: (old-k8s-version-648232)     </rng>
	I0318 21:47:58.185837   58466 main.go:141] libmachine: (old-k8s-version-648232)     
	I0318 21:47:58.185846   58466 main.go:141] libmachine: (old-k8s-version-648232)     
	I0318 21:47:58.185855   58466 main.go:141] libmachine: (old-k8s-version-648232)   </devices>
	I0318 21:47:58.185876   58466 main.go:141] libmachine: (old-k8s-version-648232) </domain>
	I0318 21:47:58.185890   58466 main.go:141] libmachine: (old-k8s-version-648232) 
	I0318 21:47:58.190395   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:e4:38:c3 in network default
	I0318 21:47:58.190944   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:47:58.190977   58466 main.go:141] libmachine: (old-k8s-version-648232) Ensuring networks are active...
	I0318 21:47:58.191782   58466 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network default is active
	I0318 21:47:58.192116   58466 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network mk-old-k8s-version-648232 is active
	I0318 21:47:58.192715   58466 main.go:141] libmachine: (old-k8s-version-648232) Getting domain xml...
	I0318 21:47:58.193497   58466 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
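
The "define libvirt domain using xml" / "Creating domain" pair above maps onto a define-then-start call sequence. A trimmed sketch with libvirt.org/go/libvirt (the XML below is cut down to the essentials, so it is illustrative only):

    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // Abbreviated version of the domain XML logged above.
    const domainXML = `<domain type='kvm'>
      <name>old-k8s-version-648232</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type></os>
    </domain>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Persist the definition, then boot it — the two steps logged above.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        name, _ := dom.GetName()
        fmt.Println("domain started:", name)
    }
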
	I0318 21:47:59.517820   58466 main.go:141] libmachine: (old-k8s-version-648232) Waiting to get IP...
	I0318 21:47:59.518826   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:47:59.519347   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:47:59.519373   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:59.519316   58598 retry.go:31] will retry after 208.661299ms: waiting for machine to come up
	I0318 21:47:59.729899   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:47:59.730505   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:47:59.730529   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:47:59.730470   58598 retry.go:31] will retry after 322.354078ms: waiting for machine to come up
	I0318 21:48:00.054862   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:00.055493   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:00.055514   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:00.055443   58598 retry.go:31] will retry after 312.827609ms: waiting for machine to come up
	I0318 21:48:00.370075   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:00.370614   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:00.370639   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:00.370572   58598 retry.go:31] will retry after 383.83221ms: waiting for machine to come up
	I0318 21:48:00.756316   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:00.757036   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:00.757062   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:00.756957   58598 retry.go:31] will retry after 537.720269ms: waiting for machine to come up
	I0318 21:48:01.296760   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:01.297310   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:01.297333   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:01.297267   58598 retry.go:31] will retry after 670.027493ms: waiting for machine to come up
	I0318 21:48:01.969217   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:01.969755   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:01.969777   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:01.969709   58598 retry.go:31] will retry after 926.244615ms: waiting for machine to come up
	I0318 21:48:02.897772   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:02.898326   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:02.898357   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:02.898283   58598 retry.go:31] will retry after 1.204065088s: waiting for machine to come up
	I0318 21:48:04.103993   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:04.104396   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:04.104428   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:04.104357   58598 retry.go:31] will retry after 1.294093015s: waiting for machine to come up
	I0318 21:48:05.400206   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:05.400733   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:05.400759   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:05.400684   58598 retry.go:31] will retry after 2.259103732s: waiting for machine to come up
	I0318 21:48:07.662102   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:07.662600   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:07.662623   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:07.662561   58598 retry.go:31] will retry after 2.285357846s: waiting for machine to come up
	I0318 21:48:09.950172   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:09.950917   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:09.950938   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:09.950838   58598 retry.go:31] will retry after 2.938451992s: waiting for machine to come up
	I0318 21:48:12.890996   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:12.891545   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:12.891575   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:12.891503   58598 retry.go:31] will retry after 4.373814051s: waiting for machine to come up
	I0318 21:48:17.270133   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:17.270729   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:48:17.270757   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:48:17.270652   58598 retry.go:31] will retry after 4.440706029s: waiting for machine to come up
	I0318 21:48:21.713988   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:21.714550   58466 main.go:141] libmachine: (old-k8s-version-648232) Found IP for machine: 192.168.61.111
	I0318 21:48:21.714577   58466 main.go:141] libmachine: (old-k8s-version-648232) Reserving static IP address...
	I0318 21:48:21.714591   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has current primary IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:21.714952   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"} in network mk-old-k8s-version-648232
	I0318 21:48:21.792542   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Getting to WaitForSSH function...
	I0318 21:48:21.792589   58466 main.go:141] libmachine: (old-k8s-version-648232) Reserved static IP address: 192.168.61.111
	I0318 21:48:21.792599   58466 main.go:141] libmachine: (old-k8s-version-648232) Waiting for SSH to be available...
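
The "will retry after …" lines above (208ms, 322ms, …, 4.4s) come from a retry helper with a growing, jittered delay while the VM waits for a DHCP lease. A generic sketch of that shape (not minikube's pkg/util/retry; the growth factor and jitter are assumptions):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping a
    // randomised, growing delay between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Grow the delay roughly per attempt and add jitter.
            d := time.Duration(float64(base) * (1.5*float64(i) + 1))
            d += time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        tries := 0
        err := retry(15, 200*time.Millisecond, func() error {
            tries++
            if tries < 5 {
                return errors.New("unable to find current IP address of domain")
            }
            return nil
        })
        fmt.Println("done:", err)
    }
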
	I0318 21:48:21.795318   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:21.795677   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:21.795708   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:21.795959   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH client type: external
	I0318 21:48:21.795987   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa (-rw-------)
	I0318 21:48:21.796019   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:48:21.796032   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | About to run SSH command:
	I0318 21:48:21.796048   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | exit 0
	I0318 21:48:21.925403   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | SSH cmd err, output: <nil>: 
	I0318 21:48:21.925675   58466 main.go:141] libmachine: (old-k8s-version-648232) KVM machine creation complete!
	I0318 21:48:21.926046   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:48:21.926596   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:48:21.926798   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:48:21.926982   58466 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 21:48:21.927004   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetState
	I0318 21:48:21.928516   58466 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 21:48:21.928548   58466 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 21:48:21.928555   58466 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 21:48:21.928564   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:21.931041   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:21.931441   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:21.931466   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:21.931655   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:21.931821   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:21.932028   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:21.932198   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:21.932405   58466 main.go:141] libmachine: Using SSH client type: native
	I0318 21:48:21.932693   58466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:48:21.932711   58466 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 21:48:22.036724   58466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
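
The readiness check above simply runs "exit 0" over SSH until it succeeds. A self-contained sketch of the same probe with golang.org/x/crypto/ssh (the log shows the external /usr/bin/ssh client being used instead, so this is an illustrative equivalent):

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // probeSSH runs "exit 0" on the target; it returns nil once sshd accepts
    // the key and the command succeeds.
    func probeSSH(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        // Address, user and key path copied from the log lines above.
        if err := probeSSH("192.168.61.111:22", "docker",
            "/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }
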
	I0318 21:48:22.036759   58466 main.go:141] libmachine: Detecting the provisioner...
	I0318 21:48:22.036771   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:22.039917   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.040234   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:22.040264   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.040452   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:22.040646   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.040898   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.041098   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:22.041271   58466 main.go:141] libmachine: Using SSH client type: native
	I0318 21:48:22.041509   58466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:48:22.041523   58466 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 21:48:22.154426   58466 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 21:48:22.154523   58466 main.go:141] libmachine: found compatible host: buildroot
	I0318 21:48:22.154538   58466 main.go:141] libmachine: Provisioning with buildroot...
	I0318 21:48:22.154550   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:48:22.154811   58466 buildroot.go:166] provisioning hostname "old-k8s-version-648232"
	I0318 21:48:22.154841   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:48:22.155102   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:22.157911   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.158338   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:22.158363   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.158516   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:22.158697   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.158878   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.159055   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:22.159225   58466 main.go:141] libmachine: Using SSH client type: native
	I0318 21:48:22.159451   58466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:48:22.159471   58466 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-648232 && echo "old-k8s-version-648232" | sudo tee /etc/hostname
	I0318 21:48:22.286949   58466 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-648232
	
	I0318 21:48:22.286988   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:22.290449   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.290864   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:22.290892   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.291146   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:22.291400   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.291602   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.291783   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:22.291969   58466 main.go:141] libmachine: Using SSH client type: native
	I0318 21:48:22.292182   58466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:48:22.292210   58466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-648232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-648232/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-648232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:48:22.409868   58466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:48:22.409900   58466 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:48:22.409956   58466 buildroot.go:174] setting up certificates
	I0318 21:48:22.409970   58466 provision.go:84] configureAuth start
	I0318 21:48:22.409986   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:48:22.410263   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:48:22.413136   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.413499   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:22.413521   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.413626   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:22.415949   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.416278   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:22.416298   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.416473   58466 provision.go:143] copyHostCerts
	I0318 21:48:22.416535   58466 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:48:22.416548   58466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:48:22.416622   58466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:48:22.416750   58466 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:48:22.416760   58466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:48:22.416789   58466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:48:22.416882   58466 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:48:22.416893   58466 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:48:22.416937   58466 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:48:22.417026   58466 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-648232 san=[127.0.0.1 192.168.61.111 localhost minikube old-k8s-version-648232]
	I0318 21:48:22.538427   58466 provision.go:177] copyRemoteCerts
	I0318 21:48:22.538506   58466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:48:22.538535   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:22.541500   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.541846   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:22.541860   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.542183   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:22.542393   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.542535   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:22.542688   58466 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:48:22.630278   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:48:22.656847   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:48:22.684196   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:48:22.710857   58466 provision.go:87] duration metric: took 300.873348ms to configureAuth
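
configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.61.111, localhost, minikube and the node name. A compact sketch of minting such a certificate with Go's crypto/x509 (self-signed here for brevity, whereas minikube signs with the CA under .minikube/certs):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-648232"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-648232"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.111")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        // Emit the PEM-encoded certificate; the real flow writes server.pem and
        // then scps it to /etc/docker on the guest, as logged below.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
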
	I0318 21:48:22.710889   58466 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:48:22.711107   58466 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:48:22.711192   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:22.714265   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.714710   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:22.714742   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:22.714963   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:22.715164   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.715340   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:22.715481   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:22.715650   58466 main.go:141] libmachine: Using SSH client type: native
	I0318 21:48:22.715837   58466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:48:22.715859   58466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:48:23.017710   58466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:48:23.017738   58466 main.go:141] libmachine: Checking connection to Docker...
	I0318 21:48:23.017749   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetURL
	I0318 21:48:23.019178   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using libvirt version 6000000
	I0318 21:48:23.021773   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.022222   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:23.022251   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.022510   58466 main.go:141] libmachine: Docker is up and running!
	I0318 21:48:23.022527   58466 main.go:141] libmachine: Reticulating splines...
	I0318 21:48:23.022535   58466 client.go:171] duration metric: took 25.308334056s to LocalClient.Create
	I0318 21:48:23.022562   58466 start.go:167] duration metric: took 25.308423199s to libmachine.API.Create "old-k8s-version-648232"
	I0318 21:48:23.022586   58466 start.go:293] postStartSetup for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:48:23.022599   58466 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:48:23.022617   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:48:23.022820   58466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:48:23.022842   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:23.025236   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.025529   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:23.025552   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.025741   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:23.025937   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:23.026089   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:23.026216   58466 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:48:23.116602   58466 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:48:23.121775   58466 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:48:23.121796   58466 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:48:23.121859   58466 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:48:23.121969   58466 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:48:23.122102   58466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:48:23.133952   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:48:23.163571   58466 start.go:296] duration metric: took 140.970793ms for postStartSetup
	I0318 21:48:23.163616   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:48:23.164257   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:48:23.166905   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.167206   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:23.167230   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.167492   58466 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:48:23.167710   58466 start.go:128] duration metric: took 25.476972344s to createHost
	I0318 21:48:23.167733   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:23.170065   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.170462   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:23.170493   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.170600   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:23.170813   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:23.170966   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:23.171106   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:23.171264   58466 main.go:141] libmachine: Using SSH client type: native
	I0318 21:48:23.171437   58466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:48:23.171453   58466 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:48:23.278476   58466 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710798503.253768046
	
	I0318 21:48:23.278501   58466 fix.go:216] guest clock: 1710798503.253768046
	I0318 21:48:23.278511   58466 fix.go:229] Guest: 2024-03-18 21:48:23.253768046 +0000 UTC Remote: 2024-03-18 21:48:23.167722741 +0000 UTC m=+40.833788395 (delta=86.045305ms)
	I0318 21:48:23.278536   58466 fix.go:200] guest clock delta is within tolerance: 86.045305ms
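The two fix.go lines above are minikube's clock-skew check: it parses the guest time returned by `date +%s.%N` over SSH, diffs it against the host-side timestamp taken when the command ran, and only resyncs the guest clock when the delta exceeds a tolerance (here the 86ms delta is accepted). A rough Go sketch of that comparison; the 2s tolerance below is only for illustration, not minikube's actual setting:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock that no adjustment is needed.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tol
    }

    func main() {
        guest := time.Unix(1710798503, 253768046) // parsed from `date +%s.%N` on the VM
        host := time.Now()
        fmt.Println(withinTolerance(guest, host, 2*time.Second))
    }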
	I0318 21:48:23.278543   58466 start.go:83] releasing machines lock for "old-k8s-version-648232", held for 25.587924369s
	I0318 21:48:23.278569   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:48:23.278834   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:48:23.281524   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.281898   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:23.281930   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.282105   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:48:23.282644   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:48:23.282854   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:48:23.282959   58466 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:48:23.283003   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:23.283067   58466 ssh_runner.go:195] Run: cat /version.json
	I0318 21:48:23.283103   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:48:23.285753   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.286098   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:23.286127   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.286151   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.286253   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:23.286448   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:23.286580   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:23.286599   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:23.286615   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:23.286751   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:48:23.286795   58466 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:48:23.286909   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:48:23.287073   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:48:23.287219   58466 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:48:23.366730   58466 ssh_runner.go:195] Run: systemctl --version
	I0318 21:48:23.391256   58466 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:48:23.563842   58466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:48:23.571600   58466 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:48:23.571693   58466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:48:23.594568   58466 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:48:23.594595   58466 start.go:494] detecting cgroup driver to use...
	I0318 21:48:23.594653   58466 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:48:23.615753   58466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:48:23.633407   58466 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:48:23.633498   58466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:48:23.654509   58466 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:48:23.672161   58466 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:48:23.799243   58466 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:48:23.962640   58466 docker.go:233] disabling docker service ...
	I0318 21:48:23.962704   58466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:48:23.979967   58466 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:48:23.994899   58466 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:48:24.157842   58466 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:48:24.301103   58466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:48:24.318049   58466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:48:24.342329   58466 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:48:24.342386   58466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:48:24.356720   58466 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:48:24.356812   58466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:48:24.371308   58466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:48:24.384494   58466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
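The sed invocations above rewrite CRI-O's drop-in config so the runtime uses the pause image minikube expects and the cgroupfs cgroup manager, with conmon moved into the pod cgroup. Assuming a stock drop-in layout, the affected lines of /etc/crio/crio.conf.d/02-crio.conf would end up roughly like this (section headers shown only for orientation; the exact file shipped on the ISO may differ):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"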
	I0318 21:48:24.397951   58466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:48:24.412154   58466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:48:24.423912   58466 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:48:24.423964   58466 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:48:24.441907   58466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
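The sysctl failure above is expected on a freshly booted guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter kernel module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. A small Go sketch of that check-then-load pattern (the helper name is illustrative; it must run as root, like the commands in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter loads br_netfilter when the bridge sysctl is
    // missing, then turns on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, mErr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); mErr != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", mErr, out)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
    }

    func main() {
        fmt.Println(ensureBridgeNetfilter())
    }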
	I0318 21:48:24.455914   58466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:48:24.592148   58466 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:48:24.766694   58466 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:48:24.766772   58466 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:48:24.772682   58466 start.go:562] Will wait 60s for crictl version
	I0318 21:48:24.772755   58466 ssh_runner.go:195] Run: which crictl
	I0318 21:48:24.777073   58466 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:48:24.826021   58466 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:48:24.826104   58466 ssh_runner.go:195] Run: crio --version
	I0318 21:48:24.859558   58466 ssh_runner.go:195] Run: crio --version
	I0318 21:48:24.893240   58466 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:48:24.894312   58466 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:48:24.897178   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:24.897516   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:48:15 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:48:24.897542   58466 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:48:24.897802   58466 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:48:24.902481   58466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:48:24.916269   58466 kubeadm.go:877] updating cluster {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:48:24.916398   58466 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:48:24.916477   58466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:48:24.954097   58466 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:48:24.954169   58466 ssh_runner.go:195] Run: which lz4
	I0318 21:48:24.958912   58466 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:48:24.964035   58466 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:48:24.964066   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:48:27.031683   58466 crio.go:462] duration metric: took 2.072799124s to copy over tarball
	I0318 21:48:27.031745   58466 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:48:30.435542   58466 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.403755814s)
	I0318 21:48:30.435575   58466 crio.go:469] duration metric: took 3.403867703s to extract the tarball
	I0318 21:48:30.435585   58466 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:48:30.489126   58466 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:48:30.605889   58466 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:48:30.605918   58466 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:48:30.605990   58466 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:48:30.606242   58466 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:48:30.606280   58466 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:48:30.606326   58466 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:48:30.606351   58466 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:48:30.606499   58466 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:48:30.606530   58466 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:48:30.606504   58466 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:48:30.607765   58466 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:48:30.607764   58466 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:48:30.607798   58466 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:48:30.607886   58466 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:48:30.607891   58466 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:48:30.607904   58466 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:48:30.607938   58466 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:48:30.608236   58466 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:48:30.764581   58466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:48:30.764892   58466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:48:30.772817   58466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:48:30.774691   58466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:48:30.775566   58466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:48:30.785278   58466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:48:30.809057   58466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:48:30.990063   58466 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:48:30.990119   58466 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:48:30.990129   58466 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:48:30.990077   58466 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:48:30.990164   58466 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:48:30.990179   58466 ssh_runner.go:195] Run: which crictl
	I0318 21:48:30.990205   58466 ssh_runner.go:195] Run: which crictl
	I0318 21:48:30.990181   58466 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:48:30.990256   58466 ssh_runner.go:195] Run: which crictl
	I0318 21:48:31.011716   58466 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:48:31.011751   58466 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:48:31.011781   58466 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:48:31.011808   58466 ssh_runner.go:195] Run: which crictl
	I0318 21:48:31.011821   58466 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:48:31.011828   58466 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:48:31.011865   58466 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:48:31.011878   58466 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:48:31.011890   58466 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:48:31.011916   58466 ssh_runner.go:195] Run: which crictl
	I0318 21:48:31.011926   58466 ssh_runner.go:195] Run: which crictl
	I0318 21:48:31.011868   58466 ssh_runner.go:195] Run: which crictl
	I0318 21:48:31.011974   58466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:48:31.011945   58466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:48:31.012014   58466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:48:31.039081   58466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:48:31.086452   58466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:48:31.110460   58466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:48:31.110544   58466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:48:31.110548   58466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:48:31.110657   58466 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:48:31.110676   58466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:48:31.175683   58466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:48:31.228976   58466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:48:31.229014   58466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:48:31.229052   58466 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:48:31.577349   58466 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:48:31.748945   58466 cache_images.go:92] duration metric: took 1.143007085s to LoadCachedImages
	W0318 21:48:31.749037   58466 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
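The "needs transfer" decisions above come from comparing each required image against what `sudo podman image inspect --format {{.Id}}` reports inside the guest; when the ID does not match the expected hash (or the image is missing entirely), minikube tries to copy the image over from .minikube/cache/images, and here that fails because those cache files do not exist on the host. A simplified Go sketch of the presence check (imagePresent is an illustrative helper, not minikube's cache_images API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagePresent returns true when the container runtime already stores
    // `image` under the expected ID, so no transfer from the cache is needed.
    func imagePresent(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return false // image not present at all
        }
        return strings.TrimSpace(string(out)) == wantID
    }

    func main() {
        fmt.Println(imagePresent("registry.k8s.io/pause:3.2",
            "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
    }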
	I0318 21:48:31.749053   58466 kubeadm.go:928] updating node { 192.168.61.111 8443 v1.20.0 crio true true} ...
	I0318 21:48:31.749184   58466 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-648232 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:48:31.749255   58466 ssh_runner.go:195] Run: crio config
	I0318 21:48:31.842702   58466 cni.go:84] Creating CNI manager for ""
	I0318 21:48:31.842725   58466 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:48:31.842737   58466 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:48:31.842763   58466 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-648232 NodeName:old-k8s-version-648232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:48:31.842971   58466 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-648232"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:48:31.843041   58466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:48:31.861472   58466 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:48:31.861544   58466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:48:31.876597   58466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 21:48:31.906796   58466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:48:31.932196   58466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 21:48:31.961165   58466 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I0318 21:48:31.966002   58466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:48:31.993476   58466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:48:32.234838   58466 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:48:32.263741   58466 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232 for IP: 192.168.61.111
	I0318 21:48:32.263765   58466 certs.go:194] generating shared ca certs ...
	I0318 21:48:32.263785   58466 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:48:32.263931   58466 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:48:32.263981   58466 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:48:32.263997   58466 certs.go:256] generating profile certs ...
	I0318 21:48:32.264059   58466 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key
	I0318 21:48:32.264087   58466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.crt with IP's: []
	I0318 21:48:32.330626   58466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.crt ...
	I0318 21:48:32.330658   58466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.crt: {Name:mk864c029de97c78c2e38ad05cf233d0738bc0f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:48:32.330867   58466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key ...
	I0318 21:48:32.330892   58466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key: {Name:mk5e6c2014944df0133b07219a13ee241677e31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:48:32.331024   58466 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4
	I0318 21:48:32.331054   58466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt.a3f2b5e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.111]
	I0318 21:48:32.529521   58466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt.a3f2b5e4 ...
	I0318 21:48:32.529551   58466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt.a3f2b5e4: {Name:mk56e9721d0c1e9ac3aae3ed093bf1c7767fbfaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:48:32.529774   58466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4 ...
	I0318 21:48:32.529793   58466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4: {Name:mk621c7929fdda5c3da0533c579f7190f1bbd654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:48:32.529935   58466 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt.a3f2b5e4 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt
	I0318 21:48:32.530030   58466 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key
	I0318 21:48:32.530105   58466 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key
	I0318 21:48:32.530122   58466 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt with IP's: []
	I0318 21:48:32.648139   58466 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt ...
	I0318 21:48:32.648175   58466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt: {Name:mka52ef5f1d6e438b0cf23b52552df2982d4eed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:48:32.648367   58466 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key ...
	I0318 21:48:32.648384   58466 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key: {Name:mk33b04a91125e6ddf78b1c579de2d00b6541a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
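The apiserver profile certificate generated above is signed for the cluster service IP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.61.111). One way to double-check the SANs after the fact is to read the certificate back with Go's crypto/x509; this is only an inspection sketch, using the path from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs :", cert.IPAddresses)
    }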
	I0318 21:48:32.648652   58466 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:48:32.648700   58466 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:48:32.648710   58466 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:48:32.648744   58466 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:48:32.648830   58466 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:48:32.648859   58466 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:48:32.648957   58466 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:48:32.649798   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:48:32.706688   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:48:32.789561   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:48:32.847261   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:48:32.881462   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:48:32.924542   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:48:32.959330   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:48:32.989335   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:48:33.033168   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:48:33.076933   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:48:33.119387   58466 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:48:33.162336   58466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:48:33.197557   58466 ssh_runner.go:195] Run: openssl version
	I0318 21:48:33.207217   58466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:48:33.229887   58466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:48:33.243197   58466 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:48:33.243260   58466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:48:33.252325   58466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:48:33.270859   58466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:48:33.289370   58466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:48:33.297145   58466 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:48:33.297209   58466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:48:33.306367   58466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:48:33.325535   58466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:48:33.356245   58466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:48:33.364694   58466 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:48:33.364768   58466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:48:33.373973   58466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
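The `openssl x509 -hash -noout` calls above print the OpenSSL subject hash for each CA certificate, and the ln commands then create the hash-named symlinks (for example b5213941.0 for minikubeCA.pem) that TLS libraries look up in /etc/ssl/certs. A sketch in Go that ties the two steps together by shelling out to openssl the same way (linkByHash is an illustrative name; it needs root to write /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash symlinks certPath into /etc/ssl/certs under the OpenSSL
    // subject-hash name (<hash>.0), mirroring the ln -fs calls in the log.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // drop any stale link first
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem"))
    }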
	I0318 21:48:33.399011   58466 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:48:33.407904   58466 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 21:48:33.407950   58466 kubeadm.go:391] StartCluster: {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:48:33.408013   58466 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:48:33.408054   58466 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:48:33.469135   58466 cri.go:89] found id: ""
	I0318 21:48:33.469199   58466 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 21:48:33.491083   58466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:48:33.509197   58466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:48:33.528515   58466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:48:33.528532   58466 kubeadm.go:156] found existing configuration files:
	
	I0318 21:48:33.528592   58466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:48:33.540801   58466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:48:33.540868   58466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:48:33.552870   58466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:48:33.568212   58466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:48:33.568276   58466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:48:33.585308   58466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:48:33.601535   58466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:48:33.601599   58466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:48:33.619732   58466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:48:33.636520   58466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:48:33.636581   58466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:48:33.657467   58466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 21:48:33.902225   58466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 21:48:33.902461   58466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 21:48:34.172775   58466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 21:48:34.172942   58466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 21:48:34.173061   58466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 21:48:34.507775   58466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 21:48:34.509613   58466 out.go:204]   - Generating certificates and keys ...
	I0318 21:48:34.509706   58466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 21:48:34.509788   58466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 21:48:34.791559   58466 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 21:48:34.946406   58466 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 21:48:35.445388   58466 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 21:48:35.638179   58466 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 21:48:35.857315   58466 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 21:48:35.857632   58466 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-648232] and IPs [192.168.61.111 127.0.0.1 ::1]
	I0318 21:48:36.105935   58466 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 21:48:36.106336   58466 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-648232] and IPs [192.168.61.111 127.0.0.1 ::1]
	I0318 21:48:36.244234   58466 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 21:48:36.484041   58466 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 21:48:36.848839   58466 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 21:48:36.849295   58466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 21:48:37.174767   58466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 21:48:37.417225   58466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 21:48:37.578055   58466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 21:48:37.872893   58466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 21:48:37.896975   58466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 21:48:37.897710   58466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 21:48:37.897862   58466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 21:48:38.031932   58466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 21:48:38.033508   58466 out.go:204]   - Booting up control plane ...
	I0318 21:48:38.033656   58466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 21:48:38.038136   58466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 21:48:38.039818   58466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 21:48:38.040623   58466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 21:48:38.044581   58466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 21:49:18.041908   58466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 21:49:18.044126   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:49:18.044623   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:49:23.045643   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:49:23.045887   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:49:33.046559   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:49:33.046748   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:49:53.048021   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:49:53.048305   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:50:33.047860   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:50:33.048120   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:50:33.048147   58466 kubeadm.go:309] 
	I0318 21:50:33.048200   58466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 21:50:33.048258   58466 kubeadm.go:309] 		timed out waiting for the condition
	I0318 21:50:33.048283   58466 kubeadm.go:309] 
	I0318 21:50:33.048333   58466 kubeadm.go:309] 	This error is likely caused by:
	I0318 21:50:33.048396   58466 kubeadm.go:309] 		- The kubelet is not running
	I0318 21:50:33.048547   58466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 21:50:33.048559   58466 kubeadm.go:309] 
	I0318 21:50:33.048637   58466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 21:50:33.048673   58466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 21:50:33.048703   58466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 21:50:33.048709   58466 kubeadm.go:309] 
	I0318 21:50:33.048796   58466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 21:50:33.048935   58466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 21:50:33.048955   58466 kubeadm.go:309] 
	I0318 21:50:33.049109   58466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 21:50:33.049190   58466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 21:50:33.049263   58466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 21:50:33.049337   58466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 21:50:33.049345   58466 kubeadm.go:309] 
	I0318 21:50:33.050406   58466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 21:50:33.050477   58466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 21:50:33.050573   58466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
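The repeated [kubelet-check] failures above are kubeadm polling the kubelet's local healthz endpoint on 127.0.0.1:10248 and getting "connection refused", i.e. nothing is listening on that port. A minimal Go sketch of that probe, useful for reproducing the check by hand (illustrative only, not kubeadm's actual code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // The failure mode seen in the log above: "connection refused" means
            // the kubelet is not running, or is not serving healthz yet.
            fmt.Println("kubelet healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz:", resp.Status)
    }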
	W0318 21:50:33.050780   58466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-648232] and IPs [192.168.61.111 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-648232] and IPs [192.168.61.111 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-648232] and IPs [192.168.61.111 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-648232] and IPs [192.168.61.111 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 21:50:33.050849   58466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 21:50:36.047140   58466 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.996260999s)
	I0318 21:50:36.047257   58466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 21:50:36.066262   58466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:50:36.078389   58466 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:50:36.078409   58466 kubeadm.go:156] found existing configuration files:
	
	I0318 21:50:36.078456   58466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:50:36.089777   58466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:50:36.089829   58466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:50:36.101378   58466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:50:36.112566   58466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:50:36.112626   58466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:50:36.123358   58466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:50:36.134043   58466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:50:36.134102   58466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:50:36.145145   58466 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:50:36.155559   58466 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:50:36.155620   58466 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
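Before the retry, minikube checks each kubeconfig under /etc/kubernetes for the expected endpoint https://control-plane.minikube.internal:8443 and removes the file when the grep fails, which is what the four Run/rm pairs above show. A rough Go sketch of that pattern (assumes passwordless sudo on the node; illustrative, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is missing or the file does not exist.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }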
	I0318 21:50:36.166968   58466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 21:50:36.246787   58466 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 21:50:36.246864   58466 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 21:50:36.404969   58466 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 21:50:36.405124   58466 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 21:50:36.405249   58466 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 21:50:36.631949   58466 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 21:50:36.634171   58466 out.go:204]   - Generating certificates and keys ...
	I0318 21:50:36.634281   58466 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 21:50:36.634364   58466 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 21:50:36.634482   58466 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 21:50:36.634570   58466 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 21:50:36.634659   58466 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 21:50:36.634765   58466 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 21:50:36.634855   58466 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 21:50:36.635267   58466 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 21:50:36.635670   58466 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 21:50:36.636003   58466 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 21:50:36.636175   58466 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 21:50:36.636278   58466 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 21:50:36.898554   58466 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 21:50:37.267462   58466 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 21:50:37.611755   58466 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 21:50:37.708788   58466 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 21:50:37.725807   58466 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 21:50:37.727268   58466 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 21:50:37.727312   58466 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 21:50:37.881732   58466 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 21:50:37.883657   58466 out.go:204]   - Booting up control plane ...
	I0318 21:50:37.883770   58466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 21:50:37.891459   58466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 21:50:37.892380   58466 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 21:50:37.893307   58466 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 21:50:37.895463   58466 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 21:51:17.897693   58466 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 21:51:17.897786   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:51:17.897982   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:51:22.898448   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:51:22.898653   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:51:32.899365   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:51:32.899567   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:51:52.900699   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:51:52.900995   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:52:32.900515   58466 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 21:52:32.900770   58466 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 21:52:32.900792   58466 kubeadm.go:309] 
	I0318 21:52:32.900854   58466 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 21:52:32.900930   58466 kubeadm.go:309] 		timed out waiting for the condition
	I0318 21:52:32.900941   58466 kubeadm.go:309] 
	I0318 21:52:32.900998   58466 kubeadm.go:309] 	This error is likely caused by:
	I0318 21:52:32.901051   58466 kubeadm.go:309] 		- The kubelet is not running
	I0318 21:52:32.901213   58466 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 21:52:32.901239   58466 kubeadm.go:309] 
	I0318 21:52:32.901363   58466 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 21:52:32.901414   58466 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 21:52:32.901453   58466 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 21:52:32.901460   58466 kubeadm.go:309] 
	I0318 21:52:32.901545   58466 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 21:52:32.901614   58466 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 21:52:32.901622   58466 kubeadm.go:309] 
	I0318 21:52:32.901716   58466 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 21:52:32.901831   58466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 21:52:32.901928   58466 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 21:52:32.902030   58466 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 21:52:32.902044   58466 kubeadm.go:309] 
	I0318 21:52:32.902959   58466 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 21:52:32.903074   58466 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 21:52:32.903154   58466 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 21:52:32.903232   58466 kubeadm.go:393] duration metric: took 3m59.495283933s to StartCluster
	I0318 21:52:32.903281   58466 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:52:32.903340   58466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:52:32.952461   58466 cri.go:89] found id: ""
	I0318 21:52:32.952485   58466 logs.go:276] 0 containers: []
	W0318 21:52:32.952493   58466 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:52:32.952504   58466 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:52:32.952556   58466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:52:32.990960   58466 cri.go:89] found id: ""
	I0318 21:52:32.990983   58466 logs.go:276] 0 containers: []
	W0318 21:52:32.990992   58466 logs.go:278] No container was found matching "etcd"
	I0318 21:52:32.990998   58466 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:52:32.991048   58466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:52:33.031616   58466 cri.go:89] found id: ""
	I0318 21:52:33.031643   58466 logs.go:276] 0 containers: []
	W0318 21:52:33.031651   58466 logs.go:278] No container was found matching "coredns"
	I0318 21:52:33.031656   58466 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:52:33.031704   58466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:52:33.068913   58466 cri.go:89] found id: ""
	I0318 21:52:33.068932   58466 logs.go:276] 0 containers: []
	W0318 21:52:33.068940   58466 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:52:33.068945   58466 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:52:33.068989   58466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:52:33.103460   58466 cri.go:89] found id: ""
	I0318 21:52:33.103484   58466 logs.go:276] 0 containers: []
	W0318 21:52:33.103491   58466 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:52:33.103497   58466 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:52:33.103540   58466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:52:33.139549   58466 cri.go:89] found id: ""
	I0318 21:52:33.139580   58466 logs.go:276] 0 containers: []
	W0318 21:52:33.139591   58466 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:52:33.139599   58466 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:52:33.139655   58466 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:52:33.180437   58466 cri.go:89] found id: ""
	I0318 21:52:33.180466   58466 logs.go:276] 0 containers: []
	W0318 21:52:33.180476   58466 logs.go:278] No container was found matching "kindnet"
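Once the second attempt also times out, the harness checks whether any control-plane containers exist at all by filtering crictl by name, one component at a time; every probe above returns an empty list. A small Go sketch of the same check (assumes crictl and sudo are available on the node; illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"} {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
        }
    }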
	I0318 21:52:33.180486   58466 logs.go:123] Gathering logs for container status ...
	I0318 21:52:33.180498   58466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 21:52:33.232995   58466 logs.go:123] Gathering logs for kubelet ...
	I0318 21:52:33.233031   58466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:52:33.286878   58466 logs.go:123] Gathering logs for dmesg ...
	I0318 21:52:33.286912   58466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:52:33.301491   58466 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:52:33.301517   58466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 21:52:33.426544   58466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 21:52:33.426565   58466 logs.go:123] Gathering logs for CRI-O ...
	I0318 21:52:33.426578   58466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
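With no containers to inspect, the remaining diagnostics come straight from the node: the kubelet and CRI-O journals, recent dmesg warnings, and a kubectl describe nodes that fails because the API server is not up. A compact Go sketch of that collection step, with the command lines copied from the log above (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo journalctl -u crio -n 400",
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("== %s (err: %v) ==\n%s\n", c, err, out)
        }
    }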
	W0318 21:52:33.526085   58466 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 21:52:33.526131   58466 out.go:239] * 
	* 
	W0318 21:52:33.526190   58466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 21:52:33.526224   58466 out.go:239] * 
	* 
	W0318 21:52:33.527054   58466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 21:52:33.529829   58466 out.go:177] 
	W0318 21:52:33.530938   58466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 21:52:33.530981   58466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 21:52:33.530997   58466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 21:52:33.532169   58466 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-648232 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 6 (239.30988ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:52:33.807525   64670 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-648232" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (291.50s)
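The failure above is the kubelet never answering its health check while kubeadm waited for the control plane, so the most useful next step is simply to follow the hints the output already prints. A minimal troubleshooting sketch, assuming the old-k8s-version-648232 VM is still reachable over SSH; the commands themselves are the ones suggested in the log above, and wrapping them in `minikube ssh` (plus sudo and a tail) is the only embellishment, not something the test harness runs:

	# Check kubelet state and recent logs on the node
	out/minikube-linux-amd64 -p old-k8s-version-648232 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-648232 ssh "sudo journalctl -xeu kubelet | tail -n 100"

	# Look for control-plane containers that cri-o started and then lost
	out/minikube-linux-amd64 -p old-k8s-version-648232 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup-driver hint from the suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-648232 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

The stale-kubeconfig warning in the post-mortem would then be cleared with `minikube update-context -p old-k8s-version-648232`, as the status output itself suggests.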

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-660775 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-660775 --alsologtostderr -v=3: exit status 82 (2m0.530010719s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-660775"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:50:58.728251   64207 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:50:58.728502   64207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:50:58.728513   64207 out.go:304] Setting ErrFile to fd 2...
	I0318 21:50:58.728518   64207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:50:58.728719   64207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:50:58.729009   64207 out.go:298] Setting JSON to false
	I0318 21:50:58.729097   64207 mustload.go:65] Loading cluster: default-k8s-diff-port-660775
	I0318 21:50:58.729448   64207 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:50:58.729523   64207 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/config.json ...
	I0318 21:50:58.729716   64207 mustload.go:65] Loading cluster: default-k8s-diff-port-660775
	I0318 21:50:58.729845   64207 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:50:58.729892   64207 stop.go:39] StopHost: default-k8s-diff-port-660775
	I0318 21:50:58.730293   64207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:50:58.730351   64207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:50:58.745385   64207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35177
	I0318 21:50:58.745884   64207 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:50:58.746478   64207 main.go:141] libmachine: Using API Version  1
	I0318 21:50:58.746509   64207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:50:58.746824   64207 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:50:58.749326   64207 out.go:177] * Stopping node "default-k8s-diff-port-660775"  ...
	I0318 21:50:58.750888   64207 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 21:50:58.750920   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:50:58.751162   64207 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 21:50:58.751184   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:50:58.753917   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:50:58.754318   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:50:01 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:50:58.754356   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:50:58.754611   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:50:58.754795   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:50:58.754971   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:50:58.755142   64207 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:50:58.886782   64207 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 21:50:58.945046   64207 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 21:50:59.017660   64207 main.go:141] libmachine: Stopping "default-k8s-diff-port-660775"...
	I0318 21:50:59.017680   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 21:50:59.019384   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Stop
	I0318 21:50:59.022988   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 0/120
	I0318 21:51:00.024340   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 1/120
	I0318 21:51:01.025630   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 2/120
	I0318 21:51:02.026827   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 3/120
	I0318 21:51:03.028250   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 4/120
	I0318 21:51:04.029838   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 5/120
	I0318 21:51:05.031194   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 6/120
	I0318 21:51:06.032534   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 7/120
	I0318 21:51:07.033876   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 8/120
	I0318 21:51:08.035508   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 9/120
	I0318 21:51:09.037071   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 10/120
	I0318 21:51:10.039254   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 11/120
	I0318 21:51:11.040747   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 12/120
	I0318 21:51:12.042059   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 13/120
	I0318 21:51:13.044184   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 14/120
	I0318 21:51:14.045923   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 15/120
	I0318 21:51:15.047221   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 16/120
	I0318 21:51:16.048631   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 17/120
	I0318 21:51:17.049934   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 18/120
	I0318 21:51:18.051372   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 19/120
	I0318 21:51:19.053371   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 20/120
	I0318 21:51:20.055378   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 21/120
	I0318 21:51:21.056671   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 22/120
	I0318 21:51:22.057936   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 23/120
	I0318 21:51:23.059758   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 24/120
	I0318 21:51:24.061537   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 25/120
	I0318 21:51:25.063300   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 26/120
	I0318 21:51:26.064539   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 27/120
	I0318 21:51:27.065841   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 28/120
	I0318 21:51:28.067193   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 29/120
	I0318 21:51:29.069088   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 30/120
	I0318 21:51:30.070497   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 31/120
	I0318 21:51:31.071778   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 32/120
	I0318 21:51:32.072968   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 33/120
	I0318 21:51:33.074365   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 34/120
	I0318 21:51:34.076317   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 35/120
	I0318 21:51:35.077549   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 36/120
	I0318 21:51:36.078816   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 37/120
	I0318 21:51:37.080053   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 38/120
	I0318 21:51:38.081370   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 39/120
	I0318 21:51:39.083214   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 40/120
	I0318 21:51:40.084419   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 41/120
	I0318 21:51:41.085691   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 42/120
	I0318 21:51:42.087183   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 43/120
	I0318 21:51:43.088469   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 44/120
	I0318 21:51:44.090398   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 45/120
	I0318 21:51:45.091643   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 46/120
	I0318 21:51:46.092807   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 47/120
	I0318 21:51:47.094121   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 48/120
	I0318 21:51:48.095387   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 49/120
	I0318 21:51:49.097491   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 50/120
	I0318 21:51:50.098853   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 51/120
	I0318 21:51:51.100106   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 52/120
	I0318 21:51:52.101454   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 53/120
	I0318 21:51:53.102746   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 54/120
	I0318 21:51:54.104381   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 55/120
	I0318 21:51:55.105690   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 56/120
	I0318 21:51:56.107533   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 57/120
	I0318 21:51:57.108847   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 58/120
	I0318 21:51:58.110175   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 59/120
	I0318 21:51:59.112330   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 60/120
	I0318 21:52:00.113580   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 61/120
	I0318 21:52:01.115325   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 62/120
	I0318 21:52:02.116436   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 63/120
	I0318 21:52:03.118380   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 64/120
	I0318 21:52:04.119976   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 65/120
	I0318 21:52:05.121129   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 66/120
	I0318 21:52:06.122338   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 67/120
	I0318 21:52:07.123645   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 68/120
	I0318 21:52:08.125073   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 69/120
	I0318 21:52:09.127171   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 70/120
	I0318 21:52:10.128529   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 71/120
	I0318 21:52:11.129768   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 72/120
	I0318 21:52:12.131308   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 73/120
	I0318 21:52:13.132539   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 74/120
	I0318 21:52:14.134392   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 75/120
	I0318 21:52:15.135674   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 76/120
	I0318 21:52:16.136847   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 77/120
	I0318 21:52:17.138135   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 78/120
	I0318 21:52:18.139364   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 79/120
	I0318 21:52:19.141385   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 80/120
	I0318 21:52:20.142725   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 81/120
	I0318 21:52:21.144038   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 82/120
	I0318 21:52:22.145342   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 83/120
	I0318 21:52:23.146654   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 84/120
	I0318 21:52:24.148478   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 85/120
	I0318 21:52:25.149841   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 86/120
	I0318 21:52:26.151162   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 87/120
	I0318 21:52:27.152525   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 88/120
	I0318 21:52:28.154772   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 89/120
	I0318 21:52:29.157004   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 90/120
	I0318 21:52:30.158320   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 91/120
	I0318 21:52:31.159501   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 92/120
	I0318 21:52:32.160889   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 93/120
	I0318 21:52:33.162323   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 94/120
	I0318 21:52:34.164125   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 95/120
	I0318 21:52:35.165504   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 96/120
	I0318 21:52:36.166820   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 97/120
	I0318 21:52:37.168106   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 98/120
	I0318 21:52:38.169406   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 99/120
	I0318 21:52:39.171499   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 100/120
	I0318 21:52:40.172889   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 101/120
	I0318 21:52:41.174389   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 102/120
	I0318 21:52:42.175659   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 103/120
	I0318 21:52:43.176964   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 104/120
	I0318 21:52:44.178135   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 105/120
	I0318 21:52:45.179562   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 106/120
	I0318 21:52:46.181054   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 107/120
	I0318 21:52:47.182440   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 108/120
	I0318 21:52:48.183640   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 109/120
	I0318 21:52:49.185636   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 110/120
	I0318 21:52:50.187322   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 111/120
	I0318 21:52:51.188594   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 112/120
	I0318 21:52:52.189941   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 113/120
	I0318 21:52:53.191191   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 114/120
	I0318 21:52:54.193093   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 115/120
	I0318 21:52:55.194389   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 116/120
	I0318 21:52:56.195668   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 117/120
	I0318 21:52:57.197303   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 118/120
	I0318 21:52:58.198616   64207 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for machine to stop 119/120
	I0318 21:52:59.199635   64207 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 21:52:59.199702   64207 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 21:52:59.201383   64207 out.go:177] 
	W0318 21:52:59.202911   64207 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 21:52:59.202924   64207 out.go:239] * 
	* 
	W0318 21:52:59.205605   64207 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 21:52:59.206683   64207 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-660775 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775: exit status 3 (18.652645498s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:53:17.861153   64873 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.150:22: connect: no route to host
	E0318 21:53:17.861173   64873 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.150:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-660775" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.18s)
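This stop failure (and the identical embed-certs and no-preload ones below) follows the same pattern: libmachine issues the Stop call, polls once per second for 120 iterations, the domain never leaves "Running", and by the time the post-mortem status runs SSH to 192.168.50.150 has no route to host. A manual check outside the test flow, assuming shell access to the libvirt host; the virsh commands are an assumption on my part, not part of the harness, and the domain name matches the profile name as the DBG lines above show:

	# Collect the logs the error box asks for
	out/minikube-linux-amd64 -p default-k8s-diff-port-660775 logs --file=logs.txt

	# See what libvirt thinks the domain is doing
	sudo virsh list --all
	sudo virsh dominfo default-k8s-diff-port-660775

	# If the guest is ignoring the shutdown request, force it off and retry the stop
	sudo virsh destroy default-k8s-diff-port-660775
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-660775 --alsologtostderr -v=3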

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-141758 --alsologtostderr -v=3
E0318 21:51:19.530702   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:51:37.205473   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:51:51.608236   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:51:51.613481   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:51:51.623723   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:51:51.643965   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:51:51.684213   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:51:51.764497   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:51:51.783690   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:51:51.925049   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:51:52.245862   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-141758 --alsologtostderr -v=3: exit status 82 (2m0.512538188s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-141758"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:51:00.169219   64285 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:51:00.169381   64285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:51:00.169392   64285 out.go:304] Setting ErrFile to fd 2...
	I0318 21:51:00.169398   64285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:51:00.169594   64285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:51:00.169838   64285 out.go:298] Setting JSON to false
	I0318 21:51:00.169927   64285 mustload.go:65] Loading cluster: embed-certs-141758
	I0318 21:51:00.170259   64285 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:51:00.170347   64285 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/config.json ...
	I0318 21:51:00.170528   64285 mustload.go:65] Loading cluster: embed-certs-141758
	I0318 21:51:00.170653   64285 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:51:00.170696   64285 stop.go:39] StopHost: embed-certs-141758
	I0318 21:51:00.171080   64285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:51:00.171146   64285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:51:00.185508   64285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0318 21:51:00.186018   64285 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:51:00.186617   64285 main.go:141] libmachine: Using API Version  1
	I0318 21:51:00.186640   64285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:51:00.186987   64285 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:51:00.189435   64285 out.go:177] * Stopping node "embed-certs-141758"  ...
	I0318 21:51:00.190830   64285 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 21:51:00.190870   64285 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:51:00.191101   64285 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 21:51:00.191128   64285 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:51:00.193816   64285 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:51:00.194218   64285 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:49:23 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:51:00.194261   64285 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:51:00.194409   64285 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:51:00.194582   64285 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:51:00.194746   64285 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:51:00.194879   64285 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:51:00.307786   64285 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 21:51:00.366389   64285 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 21:51:00.434750   64285 main.go:141] libmachine: Stopping "embed-certs-141758"...
	I0318 21:51:00.434777   64285 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 21:51:00.436442   64285 main.go:141] libmachine: (embed-certs-141758) Calling .Stop
	I0318 21:51:00.440089   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 0/120
	I0318 21:51:01.441498   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 1/120
	I0318 21:51:02.442902   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 2/120
	I0318 21:51:03.445107   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 3/120
	I0318 21:51:04.446595   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 4/120
	I0318 21:51:05.448587   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 5/120
	I0318 21:51:06.450206   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 6/120
	I0318 21:51:07.451607   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 7/120
	I0318 21:51:08.453095   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 8/120
	I0318 21:51:09.455430   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 9/120
	I0318 21:51:10.457387   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 10/120
	I0318 21:51:11.459429   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 11/120
	I0318 21:51:12.460819   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 12/120
	I0318 21:51:13.461921   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 13/120
	I0318 21:51:14.463187   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 14/120
	I0318 21:51:15.465195   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 15/120
	I0318 21:51:16.466522   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 16/120
	I0318 21:51:17.467879   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 17/120
	I0318 21:51:18.469152   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 18/120
	I0318 21:51:19.471429   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 19/120
	I0318 21:51:20.473345   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 20/120
	I0318 21:51:21.474616   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 21/120
	I0318 21:51:22.476040   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 22/120
	I0318 21:51:23.477455   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 23/120
	I0318 21:51:24.479410   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 24/120
	I0318 21:51:25.481256   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 25/120
	I0318 21:51:26.483328   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 26/120
	I0318 21:51:27.484600   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 27/120
	I0318 21:51:28.485786   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 28/120
	I0318 21:51:29.487092   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 29/120
	I0318 21:51:30.488839   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 30/120
	I0318 21:51:31.490474   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 31/120
	I0318 21:51:32.491768   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 32/120
	I0318 21:51:33.493312   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 33/120
	I0318 21:51:34.494767   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 34/120
	I0318 21:51:35.496469   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 35/120
	I0318 21:51:36.497822   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 36/120
	I0318 21:51:37.499223   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 37/120
	I0318 21:51:38.500429   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 38/120
	I0318 21:51:39.501801   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 39/120
	I0318 21:51:40.503824   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 40/120
	I0318 21:51:41.505301   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 41/120
	I0318 21:51:42.506729   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 42/120
	I0318 21:51:43.508651   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 43/120
	I0318 21:51:44.509964   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 44/120
	I0318 21:51:45.511869   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 45/120
	I0318 21:51:46.513197   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 46/120
	I0318 21:51:47.514488   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 47/120
	I0318 21:51:48.515850   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 48/120
	I0318 21:51:49.517286   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 49/120
	I0318 21:51:50.519357   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 50/120
	I0318 21:51:51.520821   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 51/120
	I0318 21:51:52.522175   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 52/120
	I0318 21:51:53.523440   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 53/120
	I0318 21:51:54.524783   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 54/120
	I0318 21:51:55.526662   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 55/120
	I0318 21:51:56.527890   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 56/120
	I0318 21:51:57.529313   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 57/120
	I0318 21:51:58.531190   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 58/120
	I0318 21:51:59.532787   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 59/120
	I0318 21:52:00.534814   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 60/120
	I0318 21:52:01.536168   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 61/120
	I0318 21:52:02.537409   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 62/120
	I0318 21:52:03.539368   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 63/120
	I0318 21:52:04.540956   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 64/120
	I0318 21:52:05.542944   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 65/120
	I0318 21:52:06.544377   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 66/120
	I0318 21:52:07.545761   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 67/120
	I0318 21:52:08.547189   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 68/120
	I0318 21:52:09.548417   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 69/120
	I0318 21:52:10.550161   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 70/120
	I0318 21:52:11.551552   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 71/120
	I0318 21:52:12.552845   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 72/120
	I0318 21:52:13.554230   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 73/120
	I0318 21:52:14.555662   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 74/120
	I0318 21:52:15.557041   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 75/120
	I0318 21:52:16.558322   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 76/120
	I0318 21:52:17.559614   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 77/120
	I0318 21:52:18.560878   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 78/120
	I0318 21:52:19.562241   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 79/120
	I0318 21:52:20.564397   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 80/120
	I0318 21:52:21.565927   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 81/120
	I0318 21:52:22.567265   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 82/120
	I0318 21:52:23.568582   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 83/120
	I0318 21:52:24.570027   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 84/120
	I0318 21:52:25.571996   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 85/120
	I0318 21:52:26.573387   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 86/120
	I0318 21:52:27.574739   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 87/120
	I0318 21:52:28.575924   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 88/120
	I0318 21:52:29.577234   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 89/120
	I0318 21:52:30.579313   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 90/120
	I0318 21:52:31.580942   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 91/120
	I0318 21:52:32.582112   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 92/120
	I0318 21:52:33.583516   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 93/120
	I0318 21:52:34.584650   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 94/120
	I0318 21:52:35.586094   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 95/120
	I0318 21:52:36.587519   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 96/120
	I0318 21:52:37.588839   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 97/120
	I0318 21:52:38.590266   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 98/120
	I0318 21:52:39.591494   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 99/120
	I0318 21:52:40.593328   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 100/120
	I0318 21:52:41.595233   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 101/120
	I0318 21:52:42.596633   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 102/120
	I0318 21:52:43.597894   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 103/120
	I0318 21:52:44.599136   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 104/120
	I0318 21:52:45.600778   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 105/120
	I0318 21:52:46.602091   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 106/120
	I0318 21:52:47.603423   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 107/120
	I0318 21:52:48.604753   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 108/120
	I0318 21:52:49.605998   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 109/120
	I0318 21:52:50.607883   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 110/120
	I0318 21:52:51.609204   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 111/120
	I0318 21:52:52.610421   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 112/120
	I0318 21:52:53.611736   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 113/120
	I0318 21:52:54.612986   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 114/120
	I0318 21:52:55.614793   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 115/120
	I0318 21:52:56.616097   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 116/120
	I0318 21:52:57.617312   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 117/120
	I0318 21:52:58.618664   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 118/120
	I0318 21:52:59.620002   64285 main.go:141] libmachine: (embed-certs-141758) Waiting for machine to stop 119/120
	I0318 21:53:00.621334   64285 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 21:53:00.621384   64285 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 21:53:00.623308   64285 out.go:177] 
	W0318 21:53:00.624602   64285 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 21:53:00.624615   64285 out.go:239] * 
	* 
	W0318 21:53:00.627131   64285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 21:53:00.628571   64285 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-141758 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758
E0318 21:53:08.305933   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:08.311185   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:08.321406   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:08.341639   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:08.381871   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:08.462350   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:08.622922   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:08.943498   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:09.584625   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:10.864938   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:13.425601   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:53:13.532028   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758: exit status 3 (18.510442373s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:53:19.141163   64914 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.243:22: connect: no route to host
	E0318 21:53:19.141182   64914 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.243:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-141758" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-963041 --alsologtostderr -v=3
E0318 21:52:12.089783   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:52:13.470224   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:13.475471   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:13.485710   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:13.505999   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:13.546308   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:13.626435   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:13.786828   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:14.107406   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:14.748114   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:16.028845   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:18.589332   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:23.709994   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:52:32.570996   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-963041 --alsologtostderr -v=3: exit status 82 (2m0.49014045s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-963041"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:52:05.053181   64590 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:52:05.053457   64590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:52:05.053467   64590 out.go:304] Setting ErrFile to fd 2...
	I0318 21:52:05.053474   64590 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:52:05.053657   64590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:52:05.053889   64590 out.go:298] Setting JSON to false
	I0318 21:52:05.053977   64590 mustload.go:65] Loading cluster: no-preload-963041
	I0318 21:52:05.054303   64590 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:52:05.054385   64590 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:52:05.054565   64590 mustload.go:65] Loading cluster: no-preload-963041
	I0318 21:52:05.054686   64590 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:52:05.054718   64590 stop.go:39] StopHost: no-preload-963041
	I0318 21:52:05.055110   64590 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:52:05.055165   64590 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:52:05.069544   64590 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0318 21:52:05.070064   64590 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:52:05.070666   64590 main.go:141] libmachine: Using API Version  1
	I0318 21:52:05.070688   64590 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:52:05.071087   64590 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:52:05.073060   64590 out.go:177] * Stopping node "no-preload-963041"  ...
	I0318 21:52:05.074466   64590 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 21:52:05.074493   64590 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:52:05.074708   64590 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 21:52:05.074735   64590 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:52:05.077503   64590 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:52:05.077876   64590 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:52:05.077911   64590 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:52:05.078012   64590 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:52:05.078173   64590 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:52:05.078299   64590 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:52:05.078393   64590 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:52:05.179536   64590 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 21:52:05.245395   64590 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 21:52:05.296012   64590 main.go:141] libmachine: Stopping "no-preload-963041"...
	I0318 21:52:05.296048   64590 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:52:05.297526   64590 main.go:141] libmachine: (no-preload-963041) Calling .Stop
	I0318 21:52:05.300655   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 0/120
	I0318 21:52:06.302089   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 1/120
	I0318 21:52:07.303327   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 2/120
	I0318 21:52:08.304779   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 3/120
	I0318 21:52:09.306185   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 4/120
	I0318 21:52:10.308685   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 5/120
	I0318 21:52:11.310128   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 6/120
	I0318 21:52:12.311486   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 7/120
	I0318 21:52:13.312791   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 8/120
	I0318 21:52:14.314223   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 9/120
	I0318 21:52:15.316443   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 10/120
	I0318 21:52:16.318043   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 11/120
	I0318 21:52:17.319179   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 12/120
	I0318 21:52:18.320470   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 13/120
	I0318 21:52:19.321875   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 14/120
	I0318 21:52:20.323423   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 15/120
	I0318 21:52:21.324816   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 16/120
	I0318 21:52:22.326198   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 17/120
	I0318 21:52:23.327621   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 18/120
	I0318 21:52:24.329057   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 19/120
	I0318 21:52:25.331442   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 20/120
	I0318 21:52:26.332925   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 21/120
	I0318 21:52:27.334197   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 22/120
	I0318 21:52:28.335450   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 23/120
	I0318 21:52:29.336687   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 24/120
	I0318 21:52:30.338689   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 25/120
	I0318 21:52:31.340243   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 26/120
	I0318 21:52:32.341502   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 27/120
	I0318 21:52:33.343598   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 28/120
	I0318 21:52:34.345412   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 29/120
	I0318 21:52:35.347220   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 30/120
	I0318 21:52:36.348657   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 31/120
	I0318 21:52:37.350024   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 32/120
	I0318 21:52:38.351499   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 33/120
	I0318 21:52:39.352832   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 34/120
	I0318 21:52:40.354145   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 35/120
	I0318 21:52:41.355402   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 36/120
	I0318 21:52:42.357044   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 37/120
	I0318 21:52:43.358349   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 38/120
	I0318 21:52:44.359815   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 39/120
	I0318 21:52:45.361929   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 40/120
	I0318 21:52:46.363169   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 41/120
	I0318 21:52:47.364568   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 42/120
	I0318 21:52:48.365761   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 43/120
	I0318 21:52:49.367274   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 44/120
	I0318 21:52:50.369172   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 45/120
	I0318 21:52:51.371545   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 46/120
	I0318 21:52:52.372931   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 47/120
	I0318 21:52:53.374440   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 48/120
	I0318 21:52:54.375653   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 49/120
	I0318 21:52:55.377950   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 50/120
	I0318 21:52:56.379282   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 51/120
	I0318 21:52:57.380468   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 52/120
	I0318 21:52:58.381785   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 53/120
	I0318 21:52:59.382845   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 54/120
	I0318 21:53:00.385170   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 55/120
	I0318 21:53:01.386393   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 56/120
	I0318 21:53:02.387693   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 57/120
	I0318 21:53:03.388884   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 58/120
	I0318 21:53:04.390271   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 59/120
	I0318 21:53:05.392152   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 60/120
	I0318 21:53:06.393440   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 61/120
	I0318 21:53:07.394645   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 62/120
	I0318 21:53:08.396084   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 63/120
	I0318 21:53:09.397184   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 64/120
	I0318 21:53:10.399144   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 65/120
	I0318 21:53:11.400273   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 66/120
	I0318 21:53:12.401963   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 67/120
	I0318 21:53:13.403345   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 68/120
	I0318 21:53:14.404998   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 69/120
	I0318 21:53:15.407192   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 70/120
	I0318 21:53:16.408414   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 71/120
	I0318 21:53:17.409691   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 72/120
	I0318 21:53:18.410907   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 73/120
	I0318 21:53:19.412081   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 74/120
	I0318 21:53:20.413846   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 75/120
	I0318 21:53:21.415275   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 76/120
	I0318 21:53:22.416455   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 77/120
	I0318 21:53:23.417769   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 78/120
	I0318 21:53:24.418992   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 79/120
	I0318 21:53:25.420793   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 80/120
	I0318 21:53:26.422091   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 81/120
	I0318 21:53:27.423312   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 82/120
	I0318 21:53:28.424616   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 83/120
	I0318 21:53:29.425816   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 84/120
	I0318 21:53:30.427379   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 85/120
	I0318 21:53:31.428624   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 86/120
	I0318 21:53:32.429916   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 87/120
	I0318 21:53:33.431185   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 88/120
	I0318 21:53:34.432787   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 89/120
	I0318 21:53:35.434661   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 90/120
	I0318 21:53:36.435864   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 91/120
	I0318 21:53:37.437130   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 92/120
	I0318 21:53:38.438461   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 93/120
	I0318 21:53:39.440157   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 94/120
	I0318 21:53:40.441886   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 95/120
	I0318 21:53:41.443239   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 96/120
	I0318 21:53:42.444541   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 97/120
	I0318 21:53:43.446073   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 98/120
	I0318 21:53:44.447346   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 99/120
	I0318 21:53:45.449511   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 100/120
	I0318 21:53:46.451110   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 101/120
	I0318 21:53:47.452502   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 102/120
	I0318 21:53:48.453878   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 103/120
	I0318 21:53:49.455358   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 104/120
	I0318 21:53:50.457412   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 105/120
	I0318 21:53:51.458813   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 106/120
	I0318 21:53:52.460114   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 107/120
	I0318 21:53:53.461462   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 108/120
	I0318 21:53:54.463131   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 109/120
	I0318 21:53:55.465364   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 110/120
	I0318 21:53:56.466721   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 111/120
	I0318 21:53:57.468161   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 112/120
	I0318 21:53:58.469583   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 113/120
	I0318 21:53:59.471232   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 114/120
	I0318 21:54:00.473196   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 115/120
	I0318 21:54:01.474672   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 116/120
	I0318 21:54:02.476212   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 117/120
	I0318 21:54:03.477577   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 118/120
	I0318 21:54:04.479224   64590 main.go:141] libmachine: (no-preload-963041) Waiting for machine to stop 119/120
	I0318 21:54:05.480673   64590 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 21:54:05.480737   64590 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 21:54:05.482900   64590 out.go:177] 
	W0318 21:54:05.484397   64590 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 21:54:05.484425   64590 out.go:239] * 
	* 
	W0318 21:54:05.487069   64590 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 21:54:05.488518   64590 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-963041 --alsologtostderr -v=3" : exit status 82
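The stop attempt above backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, asks the kvm2 driver to stop the VM, then polls roughly once a second for 120 attempts before giving up with GUEST_STOP_TIMEOUT. A rough shell sketch of the same bounded poll, using only the status command already shown in this report (profile name and flags copied from the log; this is not minikube's internal implementation):

    # Poll the host state for up to 120 seconds, mirroring the 0/120..119/120 loop above.
    for i in $(seq 1 120); do
      state=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-963041 2>/dev/null)
      [ "$state" = "Stopped" ] && break
      sleep 1
    done
    echo "final host state: ${state:-unknown}"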
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041
E0318 21:54:07.940489   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:54:15.040945   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:15.046204   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:15.056437   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:15.076719   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:15.116956   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:15.197241   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:15.357617   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:15.678183   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:16.318588   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:17.599193   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:20.160465   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041: exit status 3 (18.675035554s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:54:24.165253   65362 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.84:22: connect: no route to host
	E0318 21:54:24.165272   65362 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.84:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-963041" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-648232 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-648232 create -f testdata/busybox.yaml: exit status 1 (44.516842ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-648232" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-648232 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232
E0318 21:52:33.950504   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 6 (229.870315ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:52:34.083199   64709 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-648232" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 6 (228.638565ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:52:34.311912   64739 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-648232" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
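The deploy step fails only because the kubeconfig no longer contains an old-k8s-version-648232 context, which is also what the status warning above points at. A possible manual recovery, using the command that warning itself suggests (profile and manifest names taken from the log; a sketch, not part of the test):

    # Re-point kubeconfig at the profile, confirm the context exists, then retry the deploy.
    out/minikube-linux-amd64 -p old-k8s-version-648232 update-context
    kubectl config get-contexts
    kubectl --context old-k8s-version-648232 create -f testdata/busybox.yaml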

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-648232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0318 21:52:54.430933   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-648232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.669646538s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-648232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-648232 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-648232 describe deploy/metrics-server -n kube-system: exit status 1 (41.816587ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-648232" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-648232 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 6 (226.544336ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:54:25.250989   65477 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-648232" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.94s)
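The addon enable fails while applying the metrics-server manifests because the apiserver inside the guest refuses connections on localhost:8443. One rough way to confirm that from outside the test, sketch only (profile name from the log; the runtime is CRI-O, so crictl is the relevant container CLI):

    # Is kube-apiserver running inside the guest, and does /healthz answer on 8443?
    out/minikube-linux-amd64 -p old-k8s-version-648232 ssh "sudo crictl ps -a | grep kube-apiserver"
    out/minikube-linux-amd64 -p old-k8s-version-648232 ssh "curl -sk https://localhost:8443/healthz"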

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
E0318 21:53:18.546315   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775: exit status 3 (3.168219708s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:53:21.029213   64978 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.150:22: connect: no route to host
	E0318 21:53:21.029240   64978 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.150:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-660775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0318 21:53:21.833135   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:21.838385   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:21.848627   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:21.868850   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:21.909105   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:21.989434   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:22.149816   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-660775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.1531258s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.150:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-660775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775: exit status 3 (3.062706886s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:53:30.245219   65099 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.150:22: connect: no route to host
	E0318 21:53:30.245240   65099 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.150:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-660775" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
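This failure is downstream of the earlier stop timeout: the test expects the host to report "Stopped" before it enables the dashboard addon, but the guest is stuck unreachable in "Error". Written out as plain commands, the sequence the test exercises looks roughly like the following (stop flags analogous to the no-preload stop invocation shown earlier; status and addons invocations copied from this section):

    # Expected flow: stop, verify the host reports Stopped, then enable the addon offline.
    out/minikube-linux-amd64 stop -p default-k8s-diff-port-660775 --alsologtostderr -v=3
    out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-660775   # expect: Stopped
    out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-660775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4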

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758: exit status 3 (3.167939336s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:53:22.309169   65009 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.243:22: connect: no route to host
	E0318 21:53:22.309196   65009 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.243:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-141758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0318 21:53:22.411918   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:53:22.470761   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:23.111786   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:24.392809   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:26.953892   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-141758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152984333s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.243:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-141758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758
E0318 21:53:28.787400   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758: exit status 3 (3.063039525s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:53:31.525194   65129 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.243:22: connect: no route to host
	E0318 21:53:31.525214   65129 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.243:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-141758" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041: exit status 3 (3.167750061s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:54:27.333220   65426 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.84:22: connect: no route to host
	E0318 21:54:27.333241   65426 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.84:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-963041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-963041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152761007s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.84:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-963041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041
E0318 21:54:35.452427   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:54:35.521607   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:35.624855   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041: exit status 3 (3.063199311s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 21:54:36.549228   65657 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.84:22: connect: no route to host
	E0318 21:54:36.549251   65657 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.84:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-963041" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (750.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-648232 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0318 21:54:30.228382   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-648232 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m27.500460126s)

                                                
                                                
-- stdout --
	* [old-k8s-version-648232] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-648232" primary control-plane node in "old-k8s-version-648232" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-648232" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:54:29.973671   65622 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:54:29.973939   65622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:29.973950   65622 out.go:304] Setting ErrFile to fd 2...
	I0318 21:54:29.973954   65622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:29.974149   65622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:54:29.974638   65622 out.go:298] Setting JSON to false
	I0318 21:54:29.975522   65622 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5814,"bootTime":1710793056,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:54:29.975577   65622 start.go:139] virtualization: kvm guest
	I0318 21:54:29.977628   65622 out.go:177] * [old-k8s-version-648232] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:54:29.979040   65622 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:54:29.980302   65622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:54:29.979071   65622 notify.go:220] Checking for updates...
	I0318 21:54:29.982879   65622 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:54:29.984178   65622 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:54:29.985378   65622 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:54:29.986562   65622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:54:29.988115   65622 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:54:29.988476   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:29.988515   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:30.003037   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0318 21:54:30.003370   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:30.003854   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:54:30.003873   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:30.004189   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:30.004355   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:54:30.005993   65622 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 21:54:30.007280   65622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:54:30.007551   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:30.007584   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:30.021471   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34861
	I0318 21:54:30.021842   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:30.022216   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:54:30.022243   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:30.022539   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:30.022691   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:54:30.055062   65622 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:54:30.056182   65622 start.go:297] selected driver: kvm2
	I0318 21:54:30.056194   65622 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:30.056302   65622 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:54:30.056978   65622 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:30.057079   65622 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:54:30.070374   65622 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:54:30.070696   65622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:54:30.070754   65622 cni.go:84] Creating CNI manager for ""
	I0318 21:54:30.070766   65622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:54:30.070803   65622 start.go:340] cluster config:
	{Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:30.070889   65622 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:30.072531   65622 out.go:177] * Starting "old-k8s-version-648232" primary control-plane node in "old-k8s-version-648232" cluster
	I0318 21:54:30.073721   65622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:54:30.073747   65622 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 21:54:30.073757   65622 cache.go:56] Caching tarball of preloaded images
	I0318 21:54:30.073836   65622 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 21:54:30.073846   65622 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 21:54:30.073928   65622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:54:30.074079   65622 start.go:360] acquireMachinesLock for old-k8s-version-648232: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:58:29.414007   65622 start.go:364] duration metric: took 3m59.339882587s to acquireMachinesLock for "old-k8s-version-648232"
	I0318 21:58:29.414072   65622 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:29.414080   65622 fix.go:54] fixHost starting: 
	I0318 21:58:29.414429   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:29.414462   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:29.431057   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0318 21:58:29.431482   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:29.432042   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:58:29.432067   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:29.432376   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:29.432568   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:29.432725   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetState
	I0318 21:58:29.433956   65622 fix.go:112] recreateIfNeeded on old-k8s-version-648232: state=Stopped err=<nil>
	I0318 21:58:29.433996   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	W0318 21:58:29.434155   65622 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:29.436328   65622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-648232" ...
	I0318 21:58:29.437884   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .Start
	I0318 21:58:29.438022   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring networks are active...
	I0318 21:58:29.438616   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network default is active
	I0318 21:58:29.438967   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network mk-old-k8s-version-648232 is active
	I0318 21:58:29.439362   65622 main.go:141] libmachine: (old-k8s-version-648232) Getting domain xml...
	I0318 21:58:29.440065   65622 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
	I0318 21:58:30.668558   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting to get IP...
	I0318 21:58:30.669483   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.669936   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.670023   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.669931   66350 retry.go:31] will retry after 222.544346ms: waiting for machine to come up
	I0318 21:58:30.894570   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.895113   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.895140   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.895068   66350 retry.go:31] will retry after 355.752794ms: waiting for machine to come up
	I0318 21:58:31.252797   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.253265   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.253293   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.253217   66350 retry.go:31] will retry after 473.104426ms: waiting for machine to come up
	I0318 21:58:31.727579   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.728129   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.728157   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.728079   66350 retry.go:31] will retry after 566.412205ms: waiting for machine to come up
	I0318 21:58:32.295552   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.296044   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.296072   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.296004   66350 retry.go:31] will retry after 573.484484ms: waiting for machine to come up
	I0318 21:58:32.870871   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.871287   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.871346   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.871277   66350 retry.go:31] will retry after 932.863596ms: waiting for machine to come up
	I0318 21:58:33.805377   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:33.805847   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:33.805895   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:33.805795   66350 retry.go:31] will retry after 1.069321569s: waiting for machine to come up
	I0318 21:58:34.877311   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:34.877827   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:34.877860   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:34.877773   66350 retry.go:31] will retry after 1.27837332s: waiting for machine to come up
	I0318 21:58:36.158196   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:36.158633   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:36.158667   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:36.158581   66350 retry.go:31] will retry after 1.348066025s: waiting for machine to come up
	I0318 21:58:37.509248   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:37.509617   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:37.509637   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:37.509581   66350 retry.go:31] will retry after 2.080074922s: waiting for machine to come up
	I0318 21:58:39.591514   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:39.591973   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:39.592001   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:39.591934   66350 retry.go:31] will retry after 2.302421788s: waiting for machine to come up
	I0318 21:58:41.897573   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:41.898012   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:41.898035   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:41.897964   66350 retry.go:31] will retry after 2.645096928s: waiting for machine to come up
	I0318 21:58:44.544646   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:44.545116   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:44.545153   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:44.545053   66350 retry.go:31] will retry after 3.010240256s: waiting for machine to come up
	I0318 21:58:47.559274   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559793   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has current primary IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559814   65622 main.go:141] libmachine: (old-k8s-version-648232) Found IP for machine: 192.168.61.111
	I0318 21:58:47.559828   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserving static IP address...
	I0318 21:58:47.560325   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.560359   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserved static IP address: 192.168.61.111
	I0318 21:58:47.560385   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | skip adding static IP to network mk-old-k8s-version-648232 - found existing host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"}
	I0318 21:58:47.560401   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Getting to WaitForSSH function...
	I0318 21:58:47.560417   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting for SSH to be available...
	I0318 21:58:47.562852   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563285   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.563314   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563494   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH client type: external
	I0318 21:58:47.563522   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa (-rw-------)
	I0318 21:58:47.563561   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:47.563576   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | About to run SSH command:
	I0318 21:58:47.563622   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | exit 0
	I0318 21:58:47.692948   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:47.693373   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:58:47.694034   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:47.696795   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697184   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.697213   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697437   65622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:58:47.697637   65622 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:47.697658   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:47.697846   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.700225   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700525   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.700549   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700649   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.700816   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.700993   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.701112   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.701276   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.701440   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.701450   65622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:47.809658   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:47.809690   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.809920   65622 buildroot.go:166] provisioning hostname "old-k8s-version-648232"
	I0318 21:58:47.809945   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.810132   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.812510   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.812869   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.812896   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.813079   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.813266   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813414   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813559   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.813726   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.813935   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.813954   65622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-648232 && echo "old-k8s-version-648232" | sudo tee /etc/hostname
	I0318 21:58:47.949030   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-648232
	
	I0318 21:58:47.949063   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.952028   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952387   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.952424   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952586   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.952768   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.952972   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.953109   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.953280   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.953488   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.953514   65622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-648232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-648232/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-648232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:48.072416   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:48.072457   65622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:48.072484   65622 buildroot.go:174] setting up certificates
	I0318 21:58:48.072494   65622 provision.go:84] configureAuth start
	I0318 21:58:48.072506   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:48.072802   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.075880   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076202   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.076235   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076407   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.078791   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079125   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.079155   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079292   65622 provision.go:143] copyHostCerts
	I0318 21:58:48.079370   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:48.079385   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:48.079441   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:48.079552   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:48.079565   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:48.079595   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:48.079675   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:48.079686   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:48.079719   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:48.079797   65622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-648232 san=[127.0.0.1 192.168.61.111 localhost minikube old-k8s-version-648232]
	I0318 21:58:48.236852   65622 provision.go:177] copyRemoteCerts
	I0318 21:58:48.236923   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:48.236952   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.239485   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.239807   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.239839   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.240022   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.240187   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.240338   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.240470   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.338739   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:48.367538   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:58:48.397586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:48.425384   65622 provision.go:87] duration metric: took 352.877274ms to configureAuth
	I0318 21:58:48.425415   65622 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:48.425624   65622 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:58:48.425693   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.427989   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428345   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.428365   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428593   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.428793   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.428968   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.429114   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.429269   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.429434   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.429455   65622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:48.706098   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:48.706131   65622 machine.go:97] duration metric: took 1.008474629s to provisionDockerMachine
	I0318 21:58:48.706148   65622 start.go:293] postStartSetup for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:58:48.706165   65622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:48.706193   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.706546   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:48.706580   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.709104   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709434   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.709464   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709589   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.709787   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.709969   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.710109   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.792915   65622 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:48.797845   65622 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:48.797864   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:48.797932   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:48.798038   65622 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:48.798150   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:48.808487   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:48.838863   65622 start.go:296] duration metric: took 132.703395ms for postStartSetup
	I0318 21:58:48.838896   65622 fix.go:56] duration metric: took 19.424816589s for fixHost
	I0318 21:58:48.838927   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.841223   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841572   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.841603   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841683   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.841876   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842015   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842138   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.842295   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.842469   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.842483   65622 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:58:48.949868   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799128.925696756
	
	I0318 21:58:48.949893   65622 fix.go:216] guest clock: 1710799128.925696756
	I0318 21:58:48.949901   65622 fix.go:229] Guest: 2024-03-18 21:58:48.925696756 +0000 UTC Remote: 2024-03-18 21:58:48.838901995 +0000 UTC m=+258.909510680 (delta=86.794761ms)
	I0318 21:58:48.949925   65622 fix.go:200] guest clock delta is within tolerance: 86.794761ms
	I0318 21:58:48.949932   65622 start.go:83] releasing machines lock for "old-k8s-version-648232", held for 19.535879787s
	I0318 21:58:48.949963   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.950245   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.952656   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953000   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.953030   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953184   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953664   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953845   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953931   65622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:48.953973   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.954053   65622 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:48.954070   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.956479   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956764   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956801   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.956828   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956944   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957100   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957250   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957281   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.957302   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.957432   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.957451   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957582   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957721   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957858   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:49.066050   65622 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:49.072126   65622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:49.220860   65622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:49.227821   65622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:49.227882   65622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:49.245262   65622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:49.245285   65622 start.go:494] detecting cgroup driver to use...
	I0318 21:58:49.245359   65622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:49.261736   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:49.278239   65622 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:49.278289   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:49.297240   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:49.312813   65622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:49.435983   65622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:49.584356   65622 docker.go:233] disabling docker service ...
	I0318 21:58:49.584432   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:49.603469   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:49.619602   65622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:49.775541   65622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:49.919861   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:49.940785   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:49.964296   65622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:58:49.964356   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.976612   65622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:49.977221   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.988978   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.000697   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.012348   65622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:50.023873   65622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:50.033574   65622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:50.033611   65622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:50.047262   65622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:58:50.058328   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:50.205960   65622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:50.356293   65622 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:50.356376   65622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:50.361732   65622 start.go:562] Will wait 60s for crictl version
	I0318 21:58:50.361796   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:50.366347   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:50.406298   65622 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:50.406398   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.440705   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.473017   65622 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:58:50.474496   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:50.477908   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478353   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:50.478389   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478618   65622 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:50.483617   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:50.499147   65622 kubeadm.go:877] updating cluster {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:50.499269   65622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:58:50.499333   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:50.551649   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:50.551716   65622 ssh_runner.go:195] Run: which lz4
	I0318 21:58:50.556525   65622 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:58:50.561566   65622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:50.561594   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:58:52.646283   65622 crio.go:462] duration metric: took 2.089798336s to copy over tarball
	I0318 21:58:52.646359   65622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:55.995228   65622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.348837805s)
	I0318 21:58:55.995262   65622 crio.go:469] duration metric: took 3.348951107s to extract the tarball
	I0318 21:58:55.995271   65622 ssh_runner.go:146] rm: /preloaded.tar.lz4
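The preload step above copies a roughly 473 MB lz4-compressed tarball of cached images to the guest, unpacks it under /var (where CRI-O keeps its image store), and removes the tarball afterwards. A rough Go sketch of the same copy-then-extract flow run locally, assuming lz4 and GNU tar are installed; the tarball path is the one from the log, and the real code streams the file over SSH first:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // target path used in the log

	// Skip extraction if the tarball is not present, like the stat check in the log.
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball missing: %v", err)
	}

	// Same tar invocation as the log: preserve xattrs and decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract failed: %v", err)
	}

	// Remove the tarball afterwards to free disk space, as the log's rm step does.
	if err := os.Remove(tarball); err != nil {
		log.Printf("cleanup failed: %v", err)
	}
}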
	I0318 21:58:56.043148   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:56.091295   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:56.091320   65622 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:58:56.091409   65622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.091418   65622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.091431   65622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.091421   65622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.091448   65622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.091471   65622 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.091506   65622 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.091512   65622 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:58:56.092923   65622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.093028   65622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.093048   65622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.093052   65622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.092924   65622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.093136   65622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.093143   65622 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:58:56.093250   65622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.239200   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.242232   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.244160   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.248823   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.255548   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.264753   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.306940   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:58:56.359783   65622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:58:56.359825   65622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.359874   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413012   65622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:58:56.413051   65622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.413101   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413420   65622 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:58:56.413455   65622 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.413490   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.442743   65622 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:58:56.442787   65622 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.442832   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.450680   65622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:58:56.450733   65622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.450798   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.462926   65622 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:58:56.462963   65622 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:58:56.462989   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.462992   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.463034   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.463090   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.463138   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.463145   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.463159   65622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:58:56.463183   65622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.463221   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.592127   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.592159   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:58:56.593931   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:58:56.593968   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:58:56.593973   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:58:56.594059   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:58:56.594143   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:58:56.660138   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:58:56.660360   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:58:56.983635   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:57.142451   65622 cache_images.go:92] duration metric: took 1.051113719s to LoadCachedImages
	W0318 21:58:57.142554   65622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
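The image-cache pass above works in two stages: first ask the container runtime whether each required image already exists (the podman image inspect calls), and only for the missing ones fall back to a tarball under .minikube/cache/images, which in this run is absent for kube-proxy and causes the warning. A compact sketch of that decision in Go, with a hypothetical cache root and image list; this is not minikube's cache_images.go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// existsInRuntime asks the container runtime whether it already has the image.
func existsInRuntime(image string) bool {
	err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run()
	return err == nil
}

func main() {
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // hypothetical cache root
	images := []string{"registry.k8s.io/kube-proxy:v1.20.0", "registry.k8s.io/pause:3.2"}

	for _, img := range images {
		if existsInRuntime(img) {
			continue // already present, nothing to transfer
		}
		// Mirror the cache layout seen in the log: "registry.k8s.io/kube-proxy_v1.20.0".
		cached := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
		if _, err := os.Stat(cached); err != nil {
			fmt.Printf("unable to load cached image %s: %v\n", img, err)
			continue
		}
		fmt.Printf("would load %s from %s\n", img, cached)
	}
}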
	I0318 21:58:57.142575   65622 kubeadm.go:928] updating node { 192.168.61.111 8443 v1.20.0 crio true true} ...
	I0318 21:58:57.142723   65622 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-648232 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:57.142797   65622 ssh_runner.go:195] Run: crio config
	I0318 21:58:57.195416   65622 cni.go:84] Creating CNI manager for ""
	I0318 21:58:57.195439   65622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:57.195451   65622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:57.195468   65622 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-648232 NodeName:old-k8s-version-648232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:58:57.195585   65622 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-648232"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:57.195650   65622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:58:57.208700   65622 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:57.208757   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:57.220276   65622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 21:58:57.239513   65622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:57.258540   65622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
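The kubeadm.yaml.new transferred above appears to be produced by filling node-specific values (advertise address, node name, pod subnet, Kubernetes version) into an otherwise fixed manifest like the one printed earlier. A small illustration of that kind of templating with Go's text/template; the template text and type names here are made up for the sketch and are not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// Values that vary per cluster; everything else in the manifest is fixed.
type clusterValues struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
	K8sVersion       string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	v := clusterValues{
		AdvertiseAddress: "192.168.61.111",
		NodeName:         "old-k8s-version-648232",
		PodSubnet:        "10.244.0.0/16",
		K8sVersion:       "v1.20.0",
	}
	// Render the manifest to stdout; the real flow writes it out as kubeadm.yaml.new.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}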
	I0318 21:58:57.277932   65622 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:57.282433   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:57.298049   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:57.427745   65622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:57.459845   65622 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232 for IP: 192.168.61.111
	I0318 21:58:57.459867   65622 certs.go:194] generating shared ca certs ...
	I0318 21:58:57.459904   65622 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:57.460072   65622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:57.460123   65622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:57.460138   65622 certs.go:256] generating profile certs ...
	I0318 21:58:57.460254   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key
	I0318 21:58:57.460328   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4
	I0318 21:58:57.460376   65622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key
	I0318 21:58:57.460521   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:57.460560   65622 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:57.460573   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:57.460602   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:57.460637   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:57.460668   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:57.460733   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:57.461586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:57.515591   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:57.541750   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:57.575282   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:57.617495   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:58:57.657111   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:58:57.705104   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:57.737956   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:58:57.766218   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:57.793952   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:57.824458   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:57.852188   65622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:57.872773   65622 ssh_runner.go:195] Run: openssl version
	I0318 21:58:57.880817   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:57.896644   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902576   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902636   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.908893   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:57.922730   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:57.936508   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941802   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941839   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.948093   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:57.961852   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:57.974049   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978886   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978929   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.984848   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:57.997033   65622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:58.002171   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:58.008665   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:58.014908   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:58.021663   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:58.029605   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:58.038208   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
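Each "openssl x509 -checkend 86400" call above exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), which is how stale control-plane certs are detected before reuse. The same check expressed in Go with crypto/x509, using one of the cert paths from the log as an illustrative input:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given window (the -checkend equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}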
	I0318 21:58:58.044738   65622 kubeadm.go:391] StartCluster: {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:58.044828   65622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:58.044881   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.095866   65622 cri.go:89] found id: ""
	I0318 21:58:58.096010   65622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:58.108723   65622 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:58.108745   65622 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:58.108751   65622 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:58.108797   65622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:58.120754   65622 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:58.121803   65622 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:58:58.122532   65622 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-648232" cluster setting kubeconfig missing "old-k8s-version-648232" context setting]
	I0318 21:58:58.123561   65622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:58.125229   65622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:58.136331   65622 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.111
	I0318 21:58:58.136360   65622 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:58.136372   65622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:58.136416   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.179370   65622 cri.go:89] found id: ""
	I0318 21:58:58.179465   65622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:58.197860   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:58.208772   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:58.208796   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:58.208837   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:58.219033   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:58.219090   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:58.230223   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:58.240823   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:58.240886   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:58.251629   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.262525   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:58.262573   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.274831   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:58.286644   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:58.286690   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:58.298127   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:58.309664   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:58.456818   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.106974   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.334718   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.434113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.534368   65622 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:59.534461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:00.035251   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:00.534822   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.034721   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.535447   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.034809   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.535193   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.034597   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.534670   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.035493   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.535148   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:05.035054   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:05.535108   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.035211   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.535398   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.035017   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.534769   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.035221   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.534593   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.035328   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.534533   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.035384   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.534785   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.034607   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.535142   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.035259   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.535494   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.034673   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.535452   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.034630   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.535058   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:15.034609   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:15.534895   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.034956   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.535474   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.534736   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.035297   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.534669   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.035540   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.534617   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.035133   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.534922   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.035083   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.534538   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.035505   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.535008   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.035123   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.535181   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.034939   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.534985   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:25.035324   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:25.534635   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.034965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.035448   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.534690   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.034991   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.034585   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.535220   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.034539   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.535237   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.034842   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.534620   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.034614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.534583   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.035348   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.534614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.034683   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.534528   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:35.034845   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:35.535418   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.534613   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.034944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.535119   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.035549   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.534668   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.034813   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.534586   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.035532   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.535482   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.035196   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.534632   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.035183   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.535562   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.034598   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.534971   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.535025   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.035239   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.535303   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.034742   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.534584   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.034935   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.534952   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.534497   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.035380   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.535498   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.034691   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.534680   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.535213   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.034594   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.535195   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.034574   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.535423   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.035369   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.534621   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:55.035308   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:55.535503   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.035231   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.534937   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.035317   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.534581   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.034565   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.534830   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
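The block above is a fixed-interval poll: roughly every 500ms minikube checks for a kube-apiserver process with pgrep, and once the 60s budget (21:58:59 to 21:59:59) is exhausted without a hit it gives up and switches to collecting diagnostics. A generic poll-until-timeout helper in Go that follows the same shape; the probe command is taken from the log, the helper names are made up:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollUntil runs check every interval until it succeeds or the timeout elapses.
func pollUntil(interval, timeout time.Duration, check func() bool) bool {
	deadline := time.Now().Add(timeout)
	for {
		if check() {
			return true
		}
		if time.Now().After(deadline) {
			return false
		}
		time.Sleep(interval)
	}
}

func main() {
	apiserverUp := func() bool {
		// Same probe as the log: is there a kube-apiserver process for this profile?
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}
	if !pollUntil(500*time.Millisecond, 60*time.Second, apiserverUp) {
		fmt.Println("apiserver never appeared; falling back to log collection")
	}
}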
	I0318 21:59:59.535280   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:59:59.535354   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:59:59.577600   65622 cri.go:89] found id: ""
	I0318 21:59:59.577632   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.577643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:59:59.577651   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:59:59.577710   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:59:59.614134   65622 cri.go:89] found id: ""
	I0318 21:59:59.614158   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.614166   65622 logs.go:278] No container was found matching "etcd"
	I0318 21:59:59.614171   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:59:59.614245   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:59:59.653525   65622 cri.go:89] found id: ""
	I0318 21:59:59.653559   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.653571   65622 logs.go:278] No container was found matching "coredns"
	I0318 21:59:59.653578   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:59:59.653633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:59:59.699104   65622 cri.go:89] found id: ""
	I0318 21:59:59.699128   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.699139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:59:59.699146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:59:59.699214   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:59:59.735750   65622 cri.go:89] found id: ""
	I0318 21:59:59.735779   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.735789   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:59:59.735796   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:59:59.735876   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:59:59.775105   65622 cri.go:89] found id: ""
	I0318 21:59:59.775134   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.775142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:59:59.775149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:59:59.775193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:59:59.814154   65622 cri.go:89] found id: ""
	I0318 21:59:59.814181   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.814190   65622 logs.go:278] No container was found matching "kindnet"
	I0318 21:59:59.814197   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 21:59:59.814254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 21:59:59.852518   65622 cri.go:89] found id: ""
	I0318 21:59:59.852545   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.852556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 21:59:59.852565   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 21:59:59.852578   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:59:59.907243   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 21:59:59.907285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:59:59.922512   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:59:59.922540   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:00.059182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:00.059202   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:00.059216   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:00.125654   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:00.125686   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
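Once the wait fails, the diagnostics above are pulled from a fixed list of sources: the kubelet and CRI-O journals, dmesg, kubectl describe nodes, and container status via crictl. A sketch of collecting such a fixed command set into named buffers, with the commands copied from the log and the helper name made up for the example:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs each named shell command and returns its output keyed by name.
func gather(cmds map[string]string) map[string]string {
	out := make(map[string]string, len(cmds))
	for name, cmd := range cmds {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// Keep going: one failing source should not block the others.
			b = append(b, []byte(fmt.Sprintf("\n(gather error: %v)", err))...)
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	logs := gather(map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	})
	for name, text := range logs {
		fmt.Printf("==> %s <==\n%s\n", name, text)
	}
}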
	I0318 22:00:02.675440   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:02.689549   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:02.689628   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:02.731742   65622 cri.go:89] found id: ""
	I0318 22:00:02.731764   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.731771   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:02.731776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:02.731823   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:02.809611   65622 cri.go:89] found id: ""
	I0318 22:00:02.809643   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.809651   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:02.809656   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:02.809699   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:02.853939   65622 cri.go:89] found id: ""
	I0318 22:00:02.853972   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.853982   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:02.853990   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:02.854050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:02.892668   65622 cri.go:89] found id: ""
	I0318 22:00:02.892699   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.892709   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:02.892715   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:02.892773   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:02.934267   65622 cri.go:89] found id: ""
	I0318 22:00:02.934296   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.934307   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:02.934313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:02.934370   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:02.972533   65622 cri.go:89] found id: ""
	I0318 22:00:02.972556   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.972564   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:02.972569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:02.972614   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:03.011102   65622 cri.go:89] found id: ""
	I0318 22:00:03.011128   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.011137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:03.011142   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:03.011188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:03.060636   65622 cri.go:89] found id: ""
	I0318 22:00:03.060664   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.060673   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:03.060696   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:03.060710   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:03.145042   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:03.145070   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:03.145087   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:03.218475   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:03.218504   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:03.262154   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:03.262185   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:03.316766   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:03.316803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:05.833936   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:05.850780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:05.850858   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:05.894909   65622 cri.go:89] found id: ""
	I0318 22:00:05.894931   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.894938   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:05.894944   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:05.894987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:05.935989   65622 cri.go:89] found id: ""
	I0318 22:00:05.936020   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.936028   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:05.936032   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:05.936081   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:05.976774   65622 cri.go:89] found id: ""
	I0318 22:00:05.976797   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.976805   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:05.976811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:05.976869   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:06.015350   65622 cri.go:89] found id: ""
	I0318 22:00:06.015376   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.015387   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:06.015394   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:06.015453   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:06.059389   65622 cri.go:89] found id: ""
	I0318 22:00:06.059416   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.059427   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:06.059434   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:06.059513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:06.099524   65622 cri.go:89] found id: ""
	I0318 22:00:06.099544   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.099553   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:06.099558   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:06.099601   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:06.140343   65622 cri.go:89] found id: ""
	I0318 22:00:06.140374   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.140386   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:06.140393   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:06.140448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:06.179217   65622 cri.go:89] found id: ""
	I0318 22:00:06.179247   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.179257   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:06.179268   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:06.179286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:06.231348   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:06.231379   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:06.246049   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:06.246084   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:06.326182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:06.326203   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:06.326215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:06.405862   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:06.405895   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:08.955965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:08.970007   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:08.970076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:09.008724   65622 cri.go:89] found id: ""
	I0318 22:00:09.008752   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.008764   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:09.008781   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:09.008856   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:09.050121   65622 cri.go:89] found id: ""
	I0318 22:00:09.050158   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.050165   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:09.050170   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:09.050227   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:09.090263   65622 cri.go:89] found id: ""
	I0318 22:00:09.090293   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.090304   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:09.090312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:09.090375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:09.127645   65622 cri.go:89] found id: ""
	I0318 22:00:09.127679   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.127690   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:09.127697   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:09.127755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:09.169171   65622 cri.go:89] found id: ""
	I0318 22:00:09.169199   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.169211   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:09.169218   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:09.169278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:09.209923   65622 cri.go:89] found id: ""
	I0318 22:00:09.209949   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.209956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:09.209963   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:09.210013   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:09.247990   65622 cri.go:89] found id: ""
	I0318 22:00:09.248029   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.248039   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:09.248050   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:09.248109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:09.287287   65622 cri.go:89] found id: ""
	I0318 22:00:09.287326   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.287337   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:09.287347   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:09.287369   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:09.342877   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:09.342902   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:09.359137   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:09.359159   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:09.454504   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:09.454528   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:09.454543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:09.549191   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:09.549223   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:12.096415   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:12.112886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:12.112969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:12.155639   65622 cri.go:89] found id: ""
	I0318 22:00:12.155662   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.155670   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:12.155676   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:12.155729   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:12.199252   65622 cri.go:89] found id: ""
	I0318 22:00:12.199283   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.199293   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:12.199301   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:12.199385   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:12.239688   65622 cri.go:89] found id: ""
	I0318 22:00:12.239719   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.239728   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:12.239734   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:12.239788   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:12.278610   65622 cri.go:89] found id: ""
	I0318 22:00:12.278640   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.278651   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:12.278659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:12.278724   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:12.318834   65622 cri.go:89] found id: ""
	I0318 22:00:12.318864   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.318873   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:12.318881   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:12.318939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:12.358964   65622 cri.go:89] found id: ""
	I0318 22:00:12.358986   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.358994   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:12.359002   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:12.359050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:12.399041   65622 cri.go:89] found id: ""
	I0318 22:00:12.399070   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.399080   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:12.399087   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:12.399151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:12.445019   65622 cri.go:89] found id: ""
	I0318 22:00:12.445043   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.445053   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:12.445064   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:12.445079   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:12.504987   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:12.505023   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:12.521381   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:12.521408   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:12.601574   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:12.601599   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:12.601615   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:12.683772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:12.683801   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.229005   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:15.248227   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:15.248296   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:15.307918   65622 cri.go:89] found id: ""
	I0318 22:00:15.307940   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.307947   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:15.307953   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:15.307997   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:15.367388   65622 cri.go:89] found id: ""
	I0318 22:00:15.367417   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.367436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:15.367453   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:15.367513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:15.410880   65622 cri.go:89] found id: ""
	I0318 22:00:15.410910   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.410919   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:15.410926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:15.410983   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:15.450980   65622 cri.go:89] found id: ""
	I0318 22:00:15.451004   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.451011   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:15.451018   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:15.451071   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:15.491196   65622 cri.go:89] found id: ""
	I0318 22:00:15.491222   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.491233   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:15.491239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:15.491284   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:15.537135   65622 cri.go:89] found id: ""
	I0318 22:00:15.537159   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.537166   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:15.537173   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:15.537226   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:15.580730   65622 cri.go:89] found id: ""
	I0318 22:00:15.580762   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.580772   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:15.580780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:15.580852   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:15.626221   65622 cri.go:89] found id: ""
	I0318 22:00:15.626252   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.626265   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:15.626276   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:15.626292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.670571   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:15.670600   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:15.725485   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:15.725519   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:15.742790   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:15.742820   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:15.824867   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:15.824889   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:15.824924   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.407070   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:18.421757   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:18.421824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:18.461024   65622 cri.go:89] found id: ""
	I0318 22:00:18.461044   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.461052   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:18.461058   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:18.461104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:18.499002   65622 cri.go:89] found id: ""
	I0318 22:00:18.499032   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.499040   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:18.499046   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:18.499091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:18.539207   65622 cri.go:89] found id: ""
	I0318 22:00:18.539237   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.539248   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:18.539255   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:18.539315   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:18.579691   65622 cri.go:89] found id: ""
	I0318 22:00:18.579717   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.579726   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:18.579733   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:18.579814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:18.625084   65622 cri.go:89] found id: ""
	I0318 22:00:18.625111   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.625120   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:18.625126   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:18.625178   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:18.669012   65622 cri.go:89] found id: ""
	I0318 22:00:18.669038   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.669047   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:18.669053   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:18.669101   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:18.707523   65622 cri.go:89] found id: ""
	I0318 22:00:18.707544   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.707551   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:18.707557   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:18.707611   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:18.755138   65622 cri.go:89] found id: ""
	I0318 22:00:18.755162   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.755173   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:18.755184   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:18.755199   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:18.809140   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:18.809163   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:18.827102   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:18.827125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:18.904168   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:18.904194   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:18.904209   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.982438   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:18.982471   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:21.532643   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:21.547477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:21.547545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:21.585013   65622 cri.go:89] found id: ""
	I0318 22:00:21.585038   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.585049   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:21.585056   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:21.585114   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:21.628115   65622 cri.go:89] found id: ""
	I0318 22:00:21.628139   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.628147   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:21.628153   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:21.628207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:21.664896   65622 cri.go:89] found id: ""
	I0318 22:00:21.664931   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.664942   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:21.664948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:21.665010   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:21.705770   65622 cri.go:89] found id: ""
	I0318 22:00:21.705794   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.705803   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:21.705811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:21.705868   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:21.751268   65622 cri.go:89] found id: ""
	I0318 22:00:21.751296   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.751305   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:21.751313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:21.751376   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:21.798688   65622 cri.go:89] found id: ""
	I0318 22:00:21.798714   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.798724   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:21.798732   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:21.798800   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:21.839253   65622 cri.go:89] found id: ""
	I0318 22:00:21.839281   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.839290   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:21.839297   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:21.839365   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:21.884026   65622 cri.go:89] found id: ""
	I0318 22:00:21.884055   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.884068   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:21.884086   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:21.884105   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:21.940412   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:21.940446   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:21.956634   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:21.956660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:22.031458   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:22.031481   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:22.031497   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:22.115902   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:22.115932   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:24.665945   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:24.680474   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:24.680545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:24.719692   65622 cri.go:89] found id: ""
	I0318 22:00:24.719711   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.719718   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:24.719723   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:24.719768   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:24.760734   65622 cri.go:89] found id: ""
	I0318 22:00:24.760758   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.760767   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:24.760775   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:24.760830   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:24.802688   65622 cri.go:89] found id: ""
	I0318 22:00:24.802710   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.802717   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:24.802723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:24.802778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:24.842693   65622 cri.go:89] found id: ""
	I0318 22:00:24.842715   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.842723   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:24.842730   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:24.842796   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:24.887149   65622 cri.go:89] found id: ""
	I0318 22:00:24.887173   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.887185   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:24.887195   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:24.887278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:24.926465   65622 cri.go:89] found id: ""
	I0318 22:00:24.926511   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.926522   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:24.926530   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:24.926584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:24.966876   65622 cri.go:89] found id: ""
	I0318 22:00:24.966897   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.966904   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:24.966910   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:24.966957   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:25.007251   65622 cri.go:89] found id: ""
	I0318 22:00:25.007277   65622 logs.go:276] 0 containers: []
	W0318 22:00:25.007288   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:25.007298   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:25.007311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:25.092214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:25.092235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:25.092247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:25.173041   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:25.173076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:25.221169   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:25.221194   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:25.276322   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:25.276352   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:27.792368   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:27.809294   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:27.809359   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:27.848976   65622 cri.go:89] found id: ""
	I0318 22:00:27.849005   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.849015   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:27.849023   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:27.849076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:27.890416   65622 cri.go:89] found id: ""
	I0318 22:00:27.890437   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.890445   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:27.890450   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:27.890505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:27.934782   65622 cri.go:89] found id: ""
	I0318 22:00:27.934807   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.934819   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:27.934827   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:27.934911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:27.972251   65622 cri.go:89] found id: ""
	I0318 22:00:27.972275   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.972283   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:27.972288   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:27.972366   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:28.011321   65622 cri.go:89] found id: ""
	I0318 22:00:28.011345   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.011357   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:28.011363   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:28.011421   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:28.048087   65622 cri.go:89] found id: ""
	I0318 22:00:28.048109   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.048116   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:28.048122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:28.048169   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:28.088840   65622 cri.go:89] found id: ""
	I0318 22:00:28.088868   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.088878   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:28.088886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:28.088961   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:28.128687   65622 cri.go:89] found id: ""
	I0318 22:00:28.128714   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.128723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:28.128733   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:28.128745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:28.170853   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:28.170882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:28.224825   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:28.224850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:28.239744   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:28.239773   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:28.318640   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:28.318664   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:28.318680   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:30.897430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:30.914894   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:30.914950   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:30.952709   65622 cri.go:89] found id: ""
	I0318 22:00:30.952737   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.952748   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:30.952756   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:30.952814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:30.991113   65622 cri.go:89] found id: ""
	I0318 22:00:30.991142   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.991151   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:30.991159   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:30.991218   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:31.030248   65622 cri.go:89] found id: ""
	I0318 22:00:31.030273   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.030283   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:31.030291   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:31.030356   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:31.070836   65622 cri.go:89] found id: ""
	I0318 22:00:31.070860   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.070868   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:31.070874   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:31.070941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:31.109134   65622 cri.go:89] found id: ""
	I0318 22:00:31.109154   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.109162   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:31.109167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:31.109222   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:31.149757   65622 cri.go:89] found id: ""
	I0318 22:00:31.149784   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.149794   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:31.149802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:31.149862   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:31.190355   65622 cri.go:89] found id: ""
	I0318 22:00:31.190383   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.190393   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:31.190401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:31.190462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:31.229866   65622 cri.go:89] found id: ""
	I0318 22:00:31.229892   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.229900   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:31.229909   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:31.229926   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:31.284984   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:31.285027   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:31.301026   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:31.301050   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:31.378120   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:31.378143   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:31.378158   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:31.459445   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:31.459475   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:34.003989   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:34.020959   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:34.021012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:34.060045   65622 cri.go:89] found id: ""
	I0318 22:00:34.060074   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.060086   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:34.060103   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:34.060151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:34.101259   65622 cri.go:89] found id: ""
	I0318 22:00:34.101289   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.101299   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:34.101307   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:34.101372   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:34.141056   65622 cri.go:89] found id: ""
	I0318 22:00:34.141085   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.141096   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:34.141103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:34.141166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:34.179757   65622 cri.go:89] found id: ""
	I0318 22:00:34.179786   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.179797   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:34.179805   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:34.179872   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:34.221928   65622 cri.go:89] found id: ""
	I0318 22:00:34.221956   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.221989   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:34.221998   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:34.222063   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:34.260775   65622 cri.go:89] found id: ""
	I0318 22:00:34.260796   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.260804   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:34.260809   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:34.260866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:34.300910   65622 cri.go:89] found id: ""
	I0318 22:00:34.300936   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.300944   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:34.300950   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:34.300994   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:34.343581   65622 cri.go:89] found id: ""
	I0318 22:00:34.343611   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.343619   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:34.343628   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:34.343640   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:34.399298   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:34.399330   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:34.414580   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:34.414619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:34.488013   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:34.488031   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:34.488043   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:34.580958   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:34.580994   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:37.129601   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:37.147758   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:37.147827   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:37.194763   65622 cri.go:89] found id: ""
	I0318 22:00:37.194784   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.194791   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:37.194797   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:37.194845   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:37.236298   65622 cri.go:89] found id: ""
	I0318 22:00:37.236326   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.236334   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:37.236353   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:37.236488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:37.274776   65622 cri.go:89] found id: ""
	I0318 22:00:37.274803   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.274813   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:37.274819   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:37.274883   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:37.319360   65622 cri.go:89] found id: ""
	I0318 22:00:37.319385   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.319395   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:37.319401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:37.319463   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:37.365699   65622 cri.go:89] found id: ""
	I0318 22:00:37.365726   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.365734   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:37.365740   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:37.365824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:37.404758   65622 cri.go:89] found id: ""
	I0318 22:00:37.404789   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.404799   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:37.404807   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:37.404874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:37.444567   65622 cri.go:89] found id: ""
	I0318 22:00:37.444591   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.444598   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:37.444603   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:37.444665   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:37.487729   65622 cri.go:89] found id: ""
	I0318 22:00:37.487752   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.487760   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:37.487767   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:37.487786   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:37.566214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:37.566235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:37.566258   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:37.647847   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:37.647930   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:37.693027   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:37.693057   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:37.748111   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:37.748152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:40.277510   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:40.292312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:40.292384   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:40.330335   65622 cri.go:89] found id: ""
	I0318 22:00:40.330368   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.330379   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:40.330386   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:40.330441   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:40.372534   65622 cri.go:89] found id: ""
	I0318 22:00:40.372560   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.372570   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:40.372577   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:40.372624   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:40.409430   65622 cri.go:89] found id: ""
	I0318 22:00:40.409460   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.409471   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:40.409478   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:40.409525   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:40.448350   65622 cri.go:89] found id: ""
	I0318 22:00:40.448372   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.448380   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:40.448385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:40.448431   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:40.490526   65622 cri.go:89] found id: ""
	I0318 22:00:40.490550   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.490559   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:40.490564   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:40.490613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:40.528926   65622 cri.go:89] found id: ""
	I0318 22:00:40.528953   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.528963   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:40.528971   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:40.529031   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:40.565779   65622 cri.go:89] found id: ""
	I0318 22:00:40.565808   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.565818   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:40.565826   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:40.565902   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:40.604152   65622 cri.go:89] found id: ""
	I0318 22:00:40.604181   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.604192   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:40.604201   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:40.604215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:40.689274   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:40.689310   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:40.736810   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:40.736844   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:40.796033   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:40.796061   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:40.811906   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:40.811929   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:40.889595   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:43.390663   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:43.407179   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:43.407254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:43.448653   65622 cri.go:89] found id: ""
	I0318 22:00:43.448685   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.448696   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:43.448704   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:43.448772   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:43.489437   65622 cri.go:89] found id: ""
	I0318 22:00:43.489464   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.489472   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:43.489478   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:43.489533   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:43.564173   65622 cri.go:89] found id: ""
	I0318 22:00:43.564199   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.564209   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:43.564217   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:43.564278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:43.606221   65622 cri.go:89] found id: ""
	I0318 22:00:43.606250   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.606260   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:43.606267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:43.606333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:43.646748   65622 cri.go:89] found id: ""
	I0318 22:00:43.646782   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.646794   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:43.646802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:43.646864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:43.690465   65622 cri.go:89] found id: ""
	I0318 22:00:43.690496   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.690509   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:43.690519   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:43.690584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:43.730421   65622 cri.go:89] found id: ""
	I0318 22:00:43.730454   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.730464   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:43.730473   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:43.730538   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:43.769597   65622 cri.go:89] found id: ""
	I0318 22:00:43.769626   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.769636   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:43.769646   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:43.769660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:43.858316   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:43.858351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:43.907387   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:43.907417   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:43.963234   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:43.963271   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:43.979226   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:43.979253   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:44.065174   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:46.566048   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:46.583140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:46.583212   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:46.624593   65622 cri.go:89] found id: ""
	I0318 22:00:46.624634   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.624643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:46.624649   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:46.624700   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:46.664828   65622 cri.go:89] found id: ""
	I0318 22:00:46.664858   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.664868   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:46.664874   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:46.664944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:46.703632   65622 cri.go:89] found id: ""
	I0318 22:00:46.703658   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.703668   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:46.703675   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:46.703736   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:46.743379   65622 cri.go:89] found id: ""
	I0318 22:00:46.743409   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.743420   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:46.743427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:46.743487   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:46.784145   65622 cri.go:89] found id: ""
	I0318 22:00:46.784169   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.784178   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:46.784184   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:46.784233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:46.826469   65622 cri.go:89] found id: ""
	I0318 22:00:46.826491   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.826498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:46.826504   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:46.826559   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:46.868061   65622 cri.go:89] found id: ""
	I0318 22:00:46.868089   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.868102   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:46.868110   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:46.868167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:46.910584   65622 cri.go:89] found id: ""
	I0318 22:00:46.910612   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.910622   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:46.910630   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:46.910642   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:46.954131   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:46.954157   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:47.008706   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:47.008737   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:47.024447   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:47.024474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:47.113208   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:47.113228   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:47.113242   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:49.699416   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:49.714870   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:49.714943   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:49.754386   65622 cri.go:89] found id: ""
	I0318 22:00:49.754415   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.754424   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:49.754430   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:49.754485   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:49.800223   65622 cri.go:89] found id: ""
	I0318 22:00:49.800248   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.800258   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:49.800268   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:49.800331   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:49.846747   65622 cri.go:89] found id: ""
	I0318 22:00:49.846775   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.846785   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:49.846793   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:49.846842   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:49.885554   65622 cri.go:89] found id: ""
	I0318 22:00:49.885581   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.885592   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:49.885600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:49.885652   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:49.925116   65622 cri.go:89] found id: ""
	I0318 22:00:49.925136   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.925144   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:49.925149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:49.925193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:49.968467   65622 cri.go:89] found id: ""
	I0318 22:00:49.968491   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.968498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:49.968503   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:49.968575   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:50.016222   65622 cri.go:89] found id: ""
	I0318 22:00:50.016253   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.016261   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:50.016267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:50.016320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:50.057053   65622 cri.go:89] found id: ""
	I0318 22:00:50.057074   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.057082   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:50.057090   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:50.057101   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:50.137602   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:50.137631   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:50.213200   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:50.213227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:50.293533   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:50.293568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:50.312993   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:50.313019   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:50.399235   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:52.900027   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:52.914846   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:52.914918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:52.951864   65622 cri.go:89] found id: ""
	I0318 22:00:52.951887   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.951895   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:52.951900   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:52.951959   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:52.992339   65622 cri.go:89] found id: ""
	I0318 22:00:52.992374   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.992386   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:52.992393   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:52.992448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:53.030499   65622 cri.go:89] found id: ""
	I0318 22:00:53.030527   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.030536   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:53.030543   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:53.030610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:53.069607   65622 cri.go:89] found id: ""
	I0318 22:00:53.069635   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.069645   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:53.069652   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:53.069706   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:53.110235   65622 cri.go:89] found id: ""
	I0318 22:00:53.110256   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.110263   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:53.110269   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:53.110320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:53.152066   65622 cri.go:89] found id: ""
	I0318 22:00:53.152092   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.152100   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:53.152106   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:53.152166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:53.195360   65622 cri.go:89] found id: ""
	I0318 22:00:53.195386   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.195395   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:53.195402   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:53.195448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:53.235134   65622 cri.go:89] found id: ""
	I0318 22:00:53.235159   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.235166   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:53.235174   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:53.235186   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:53.286442   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:53.286473   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:53.342152   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:53.342183   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:53.358414   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:53.358438   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:53.430515   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:53.430534   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:53.430545   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:56.016088   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:56.034274   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:56.034350   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:56.095539   65622 cri.go:89] found id: ""
	I0318 22:00:56.095565   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.095581   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:56.095588   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:56.095645   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:56.149796   65622 cri.go:89] found id: ""
	I0318 22:00:56.149824   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.149834   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:56.149845   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:56.149907   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:56.205720   65622 cri.go:89] found id: ""
	I0318 22:00:56.205745   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.205760   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:56.205768   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:56.205828   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:56.250790   65622 cri.go:89] found id: ""
	I0318 22:00:56.250834   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.250862   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:56.250876   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:56.250944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:56.290516   65622 cri.go:89] found id: ""
	I0318 22:00:56.290538   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.290545   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:56.290552   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:56.290609   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:56.335528   65622 cri.go:89] found id: ""
	I0318 22:00:56.335557   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.335570   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:56.335577   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:56.335638   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:56.380336   65622 cri.go:89] found id: ""
	I0318 22:00:56.380365   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.380376   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:56.380383   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:56.380448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:56.426326   65622 cri.go:89] found id: ""
	I0318 22:00:56.426351   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.426359   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:56.426368   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:56.426385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:56.479966   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:56.480002   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.495557   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:56.495588   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:56.573474   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:56.573495   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:56.573506   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:56.657795   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:56.657826   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.206212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:59.221879   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:59.221936   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:59.265944   65622 cri.go:89] found id: ""
	I0318 22:00:59.265976   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.265986   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:59.265994   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:59.266052   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:59.305105   65622 cri.go:89] found id: ""
	I0318 22:00:59.305125   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.305132   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:59.305137   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:59.305182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:59.343573   65622 cri.go:89] found id: ""
	I0318 22:00:59.343600   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.343610   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:59.343618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:59.343674   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:59.385560   65622 cri.go:89] found id: ""
	I0318 22:00:59.385580   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.385587   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:59.385592   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:59.385639   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:59.422955   65622 cri.go:89] found id: ""
	I0318 22:00:59.422983   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.422994   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:59.423001   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:59.423062   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:59.460526   65622 cri.go:89] found id: ""
	I0318 22:00:59.460550   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.460561   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:59.460569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:59.460627   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:59.502703   65622 cri.go:89] found id: ""
	I0318 22:00:59.502732   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.502739   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:59.502753   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:59.502803   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:59.539097   65622 cri.go:89] found id: ""
	I0318 22:00:59.539120   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.539128   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:59.539136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:59.539147   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:59.613607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:59.613628   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:59.613643   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:59.697432   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:59.697460   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.744643   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:59.744671   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:59.800670   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:59.800704   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:02.318430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:02.334082   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:02.334158   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:02.383122   65622 cri.go:89] found id: ""
	I0318 22:01:02.383151   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.383161   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:02.383169   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:02.383229   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:02.426847   65622 cri.go:89] found id: ""
	I0318 22:01:02.426874   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.426884   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:02.426891   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:02.426955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:02.466377   65622 cri.go:89] found id: ""
	I0318 22:01:02.466403   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.466429   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:02.466437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:02.466501   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:02.506916   65622 cri.go:89] found id: ""
	I0318 22:01:02.506943   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.506953   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:02.506961   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:02.507021   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:02.549401   65622 cri.go:89] found id: ""
	I0318 22:01:02.549431   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.549439   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:02.549445   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:02.549494   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:02.589498   65622 cri.go:89] found id: ""
	I0318 22:01:02.589524   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.589535   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:02.589542   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:02.589603   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:02.626325   65622 cri.go:89] found id: ""
	I0318 22:01:02.626358   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.626369   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:02.626376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:02.626440   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:02.664922   65622 cri.go:89] found id: ""
	I0318 22:01:02.664949   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.664958   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:02.664969   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:02.664986   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:02.722853   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:02.722883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:02.740280   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:02.740305   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:02.819215   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:02.819232   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:02.819244   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:02.902355   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:02.902395   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.452180   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:05.465921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:05.465981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:05.507224   65622 cri.go:89] found id: ""
	I0318 22:01:05.507245   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.507255   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:05.507262   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:05.507329   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:05.544705   65622 cri.go:89] found id: ""
	I0318 22:01:05.544737   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.544748   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:05.544754   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:05.544814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:05.583552   65622 cri.go:89] found id: ""
	I0318 22:01:05.583580   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.583592   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:05.583600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:05.583668   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:05.620969   65622 cri.go:89] found id: ""
	I0318 22:01:05.620995   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.621002   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:05.621009   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:05.621054   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:05.662789   65622 cri.go:89] found id: ""
	I0318 22:01:05.662816   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.662827   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:05.662835   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:05.662900   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:05.701457   65622 cri.go:89] found id: ""
	I0318 22:01:05.701496   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.701506   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:05.701513   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:05.701566   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:05.742050   65622 cri.go:89] found id: ""
	I0318 22:01:05.742078   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.742088   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:05.742095   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:05.742162   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:05.782620   65622 cri.go:89] found id: ""
	I0318 22:01:05.782645   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.782653   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:05.782661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:05.782672   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:05.875779   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:05.875815   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.927687   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:05.927711   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:05.979235   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:05.979264   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:05.997508   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:05.997536   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:06.073619   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:08.574277   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:08.588248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:08.588312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:08.626950   65622 cri.go:89] found id: ""
	I0318 22:01:08.626976   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.626987   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:08.626993   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:08.627050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:08.670404   65622 cri.go:89] found id: ""
	I0318 22:01:08.670429   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.670436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:08.670442   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:08.670505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:08.706036   65622 cri.go:89] found id: ""
	I0318 22:01:08.706063   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.706072   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:08.706079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:08.706134   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:08.743251   65622 cri.go:89] found id: ""
	I0318 22:01:08.743279   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.743290   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:08.743298   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:08.743361   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:08.782303   65622 cri.go:89] found id: ""
	I0318 22:01:08.782329   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.782340   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:08.782347   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:08.782413   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:08.827060   65622 cri.go:89] found id: ""
	I0318 22:01:08.827086   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.827095   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:08.827104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:08.827157   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:08.867098   65622 cri.go:89] found id: ""
	I0318 22:01:08.867126   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.867137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:08.867145   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:08.867192   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:08.906283   65622 cri.go:89] found id: ""
	I0318 22:01:08.906314   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.906323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:08.906334   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:08.906349   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:08.959145   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:08.959171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:08.976307   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:08.976336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:09.049255   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:09.049285   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:09.049300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:09.139458   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:09.139493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:11.687215   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:11.701855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:11.701926   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:11.740185   65622 cri.go:89] found id: ""
	I0318 22:01:11.740213   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.740224   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:11.740231   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:11.740293   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:11.782083   65622 cri.go:89] found id: ""
	I0318 22:01:11.782110   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.782119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:11.782126   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:11.782187   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:11.830887   65622 cri.go:89] found id: ""
	I0318 22:01:11.830910   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.830920   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:11.830928   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:11.830981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:11.868585   65622 cri.go:89] found id: ""
	I0318 22:01:11.868607   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.868613   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:11.868618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:11.868673   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:11.912298   65622 cri.go:89] found id: ""
	I0318 22:01:11.912324   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.912336   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:11.912343   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:11.912396   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:11.957511   65622 cri.go:89] found id: ""
	I0318 22:01:11.957536   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.957546   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:11.957553   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:11.957610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:11.998894   65622 cri.go:89] found id: ""
	I0318 22:01:11.998916   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.998927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:11.998934   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:11.998984   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:12.039419   65622 cri.go:89] found id: ""
	I0318 22:01:12.039446   65622 logs.go:276] 0 containers: []
	W0318 22:01:12.039458   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:12.039468   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:12.039484   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:12.094721   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:12.094750   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:12.110328   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:12.110351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:12.183351   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:12.183371   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:12.183385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:12.260772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:12.260812   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:14.806518   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:14.821701   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:14.821760   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:14.864280   65622 cri.go:89] found id: ""
	I0318 22:01:14.864307   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.864316   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:14.864322   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:14.864380   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:14.913041   65622 cri.go:89] found id: ""
	I0318 22:01:14.913071   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.913083   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:14.913091   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:14.913155   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:14.951563   65622 cri.go:89] found id: ""
	I0318 22:01:14.951586   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.951594   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:14.951600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:14.951651   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:14.993070   65622 cri.go:89] found id: ""
	I0318 22:01:14.993103   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.993114   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:14.993122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:14.993182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:15.033552   65622 cri.go:89] found id: ""
	I0318 22:01:15.033580   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.033591   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:15.033600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:15.033660   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:15.075982   65622 cri.go:89] found id: ""
	I0318 22:01:15.076009   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.076020   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:15.076031   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:15.076090   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:15.118757   65622 cri.go:89] found id: ""
	I0318 22:01:15.118784   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.118795   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:15.118801   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:15.118844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:15.160333   65622 cri.go:89] found id: ""
	I0318 22:01:15.160355   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.160366   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:15.160374   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:15.160387   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:15.239607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:15.239635   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:15.239653   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:15.324254   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:15.324285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:15.370722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:15.370754   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:15.423268   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:15.423297   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:17.940107   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:17.954692   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:17.954749   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:18.001810   65622 cri.go:89] found id: ""
	I0318 22:01:18.001831   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.001838   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:18.001844   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:18.001903   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:18.042871   65622 cri.go:89] found id: ""
	I0318 22:01:18.042897   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.042909   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:18.042916   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:18.042975   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:18.083933   65622 cri.go:89] found id: ""
	I0318 22:01:18.083956   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.083964   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:18.083969   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:18.084019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:18.125590   65622 cri.go:89] found id: ""
	I0318 22:01:18.125617   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.125628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:18.125636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:18.125697   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:18.166696   65622 cri.go:89] found id: ""
	I0318 22:01:18.166727   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.166737   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:18.166745   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:18.166806   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:18.211273   65622 cri.go:89] found id: ""
	I0318 22:01:18.211297   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.211308   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:18.211315   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:18.211382   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:18.251821   65622 cri.go:89] found id: ""
	I0318 22:01:18.251844   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.251851   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:18.251860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:18.251918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:18.290507   65622 cri.go:89] found id: ""
	I0318 22:01:18.290531   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.290541   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:18.290552   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:18.290568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:18.349013   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:18.349041   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:18.366082   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:18.366113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:18.441742   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:18.441766   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:18.441780   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:18.535299   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:18.535335   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:21.077652   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:21.092980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:21.093039   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:21.132742   65622 cri.go:89] found id: ""
	I0318 22:01:21.132762   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.132770   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:21.132776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:21.132833   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:21.170814   65622 cri.go:89] found id: ""
	I0318 22:01:21.170836   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.170844   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:21.170849   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:21.170911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:21.212812   65622 cri.go:89] found id: ""
	I0318 22:01:21.212845   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.212853   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:21.212860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:21.212924   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:21.254010   65622 cri.go:89] found id: ""
	I0318 22:01:21.254036   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.254044   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:21.254052   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:21.254095   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:21.292032   65622 cri.go:89] found id: ""
	I0318 22:01:21.292061   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.292073   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:21.292083   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:21.292152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:21.336946   65622 cri.go:89] found id: ""
	I0318 22:01:21.336975   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.336985   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:21.336992   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:21.337043   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:21.380295   65622 cri.go:89] found id: ""
	I0318 22:01:21.380319   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.380328   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:21.380336   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:21.380399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:21.417674   65622 cri.go:89] found id: ""
	I0318 22:01:21.417701   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.417708   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:21.417717   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:21.417728   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:21.470782   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:21.470808   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:21.486015   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:21.486036   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:21.560654   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:21.560682   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:21.560699   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:21.644108   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:21.644146   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:24.190787   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:24.205695   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:24.205761   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:24.262577   65622 cri.go:89] found id: ""
	I0318 22:01:24.262602   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.262610   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:24.262615   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:24.262680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:24.304807   65622 cri.go:89] found id: ""
	I0318 22:01:24.304835   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.304845   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:24.304853   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:24.304933   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:24.345595   65622 cri.go:89] found id: ""
	I0318 22:01:24.345670   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.345688   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:24.345696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:24.345762   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:24.388471   65622 cri.go:89] found id: ""
	I0318 22:01:24.388498   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.388508   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:24.388515   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:24.388573   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:24.429610   65622 cri.go:89] found id: ""
	I0318 22:01:24.429641   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.429653   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:24.429663   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:24.429728   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:24.469661   65622 cri.go:89] found id: ""
	I0318 22:01:24.469683   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.469690   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:24.469696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:24.469740   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:24.508086   65622 cri.go:89] found id: ""
	I0318 22:01:24.508115   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.508126   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:24.508133   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:24.508195   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:24.548963   65622 cri.go:89] found id: ""
	I0318 22:01:24.548988   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.548998   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:24.549009   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:24.549028   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:24.603983   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:24.604012   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:24.620185   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:24.620207   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:24.699677   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:24.699699   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:24.699713   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:24.778830   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:24.778884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:27.334749   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:27.349132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:27.349188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:27.394163   65622 cri.go:89] found id: ""
	I0318 22:01:27.394190   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.394197   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:27.394203   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:27.394259   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:27.435176   65622 cri.go:89] found id: ""
	I0318 22:01:27.435198   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.435207   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:27.435215   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:27.435273   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:27.475388   65622 cri.go:89] found id: ""
	I0318 22:01:27.475414   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.475422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:27.475427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:27.475474   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:27.516225   65622 cri.go:89] found id: ""
	I0318 22:01:27.516247   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.516255   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:27.516265   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:27.516321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:27.554423   65622 cri.go:89] found id: ""
	I0318 22:01:27.554451   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.554459   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:27.554465   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:27.554518   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:27.592315   65622 cri.go:89] found id: ""
	I0318 22:01:27.592342   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.592352   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:27.592360   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:27.592418   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:27.634820   65622 cri.go:89] found id: ""
	I0318 22:01:27.634842   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.634849   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:27.634855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:27.634912   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:27.673677   65622 cri.go:89] found id: ""
	I0318 22:01:27.673703   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.673713   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:27.673724   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:27.673738   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:27.728342   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:27.728370   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:27.745465   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:27.745493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:27.817800   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:27.817822   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:27.817836   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:27.905115   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:27.905152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:30.450454   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:30.464916   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:30.464969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:30.504399   65622 cri.go:89] found id: ""
	I0318 22:01:30.504432   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.504443   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:30.504452   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:30.504505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:30.543216   65622 cri.go:89] found id: ""
	I0318 22:01:30.543240   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.543248   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:30.543254   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:30.543310   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:30.581415   65622 cri.go:89] found id: ""
	I0318 22:01:30.581440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.581451   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:30.581459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:30.581515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:30.620419   65622 cri.go:89] found id: ""
	I0318 22:01:30.620440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.620447   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:30.620453   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:30.620495   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:30.671859   65622 cri.go:89] found id: ""
	I0318 22:01:30.671886   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.671893   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:30.671899   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:30.671955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:30.732705   65622 cri.go:89] found id: ""
	I0318 22:01:30.732732   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.732742   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:30.732750   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:30.732811   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:30.793811   65622 cri.go:89] found id: ""
	I0318 22:01:30.793839   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.793850   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:30.793856   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:30.793915   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:30.851516   65622 cri.go:89] found id: ""
	I0318 22:01:30.851539   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.851546   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:30.851555   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:30.851566   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:30.907463   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:30.907496   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:30.924254   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:30.924286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:31.002155   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:31.002177   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:31.002193   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:31.085486   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:31.085515   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:33.627379   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:33.641314   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:33.641378   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:33.683093   65622 cri.go:89] found id: ""
	I0318 22:01:33.683119   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.683129   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:33.683136   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:33.683193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:33.724006   65622 cri.go:89] found id: ""
	I0318 22:01:33.724034   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.724042   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:33.724048   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:33.724091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:33.761196   65622 cri.go:89] found id: ""
	I0318 22:01:33.761224   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.761240   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:33.761248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:33.761306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:33.800636   65622 cri.go:89] found id: ""
	I0318 22:01:33.800661   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.800670   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:33.800676   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:33.800733   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:33.839423   65622 cri.go:89] found id: ""
	I0318 22:01:33.839450   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.839458   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:33.839464   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:33.839508   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:33.883076   65622 cri.go:89] found id: ""
	I0318 22:01:33.883102   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.883112   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:33.883118   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:33.883174   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:33.921886   65622 cri.go:89] found id: ""
	I0318 22:01:33.921909   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.921920   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:33.921926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:33.921981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:33.964632   65622 cri.go:89] found id: ""
	I0318 22:01:33.964659   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.964670   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:33.964680   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:33.964700   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:34.043708   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:34.043731   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:34.043743   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:34.129150   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:34.129178   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:34.176067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:34.176089   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:34.231399   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:34.231433   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:36.747929   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:36.761803   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:36.761859   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:36.806407   65622 cri.go:89] found id: ""
	I0318 22:01:36.806434   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.806441   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:36.806447   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:36.806498   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:36.849046   65622 cri.go:89] found id: ""
	I0318 22:01:36.849073   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.849084   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:36.849092   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:36.849152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:36.889880   65622 cri.go:89] found id: ""
	I0318 22:01:36.889910   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.889922   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:36.889929   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:36.889995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:36.936012   65622 cri.go:89] found id: ""
	I0318 22:01:36.936033   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.936041   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:36.936046   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:36.936094   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:36.977538   65622 cri.go:89] found id: ""
	I0318 22:01:36.977568   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.977578   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:36.977587   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:36.977647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:37.014843   65622 cri.go:89] found id: ""
	I0318 22:01:37.014870   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.014881   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:37.014888   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:37.014956   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:37.055058   65622 cri.go:89] found id: ""
	I0318 22:01:37.055086   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.055097   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:37.055104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:37.055167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:37.100605   65622 cri.go:89] found id: ""
	I0318 22:01:37.100633   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.100642   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:37.100652   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:37.100666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:37.181840   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:37.181874   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:37.232689   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:37.232721   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:37.287264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:37.287294   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:37.305614   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:37.305638   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:37.389196   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
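	(The loop recorded above repeats the same probe every few seconds: pgrep for a kube-apiserver process, crictl listings for each expected control-plane container, then a log sweep whose "describe nodes" step fails because nothing is serving on localhost:8443. A minimal manual re-run of the same checks, assuming the node is reachable over minikube ssh; <profile> is a placeholder for the profile name, and the kubectl path and flags are copied from the log lines above:
	  # placeholder profile name; commands mirror the probes shown in this log
	  minikube -p <profile> ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	  minikube -p <profile> ssh -- 'sudo journalctl -u kubelet -n 400'
	  minikube -p <profile> ssh -- 'sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig'
	If the crictl listing stays empty and describe nodes keeps refusing the connection, the API server container was never created, which matches every iteration below.)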
	I0318 22:01:39.889461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:39.904409   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:39.904472   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:39.944610   65622 cri.go:89] found id: ""
	I0318 22:01:39.944633   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.944641   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:39.944647   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:39.944701   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:39.984337   65622 cri.go:89] found id: ""
	I0318 22:01:39.984360   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.984367   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:39.984373   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:39.984427   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:40.026238   65622 cri.go:89] found id: ""
	I0318 22:01:40.026264   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.026276   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:40.026282   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:40.026338   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:40.075591   65622 cri.go:89] found id: ""
	I0318 22:01:40.075619   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.075628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:40.075636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:40.075686   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:40.126829   65622 cri.go:89] found id: ""
	I0318 22:01:40.126859   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.126871   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:40.126880   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:40.126941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:40.167695   65622 cri.go:89] found id: ""
	I0318 22:01:40.167724   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.167735   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:40.167744   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:40.167802   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:40.205545   65622 cri.go:89] found id: ""
	I0318 22:01:40.205570   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.205582   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:40.205589   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:40.205636   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:40.245521   65622 cri.go:89] found id: ""
	I0318 22:01:40.245547   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.245556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:40.245567   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:40.245583   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:40.306315   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:40.306348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:40.324996   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:40.325021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:40.406484   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:40.406513   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:40.406526   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:40.492294   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:40.492323   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:43.034812   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:43.049661   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:43.049727   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:43.089419   65622 cri.go:89] found id: ""
	I0318 22:01:43.089444   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.089453   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:43.089461   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:43.089515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:43.130350   65622 cri.go:89] found id: ""
	I0318 22:01:43.130384   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.130394   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:43.130401   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:43.130462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:43.171480   65622 cri.go:89] found id: ""
	I0318 22:01:43.171506   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.171515   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:43.171522   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:43.171567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:43.210215   65622 cri.go:89] found id: ""
	I0318 22:01:43.210240   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.210249   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:43.210258   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:43.210312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:43.247024   65622 cri.go:89] found id: ""
	I0318 22:01:43.247049   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.247056   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:43.247063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:43.247113   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:43.283614   65622 cri.go:89] found id: ""
	I0318 22:01:43.283640   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.283651   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:43.283659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:43.283716   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:43.327442   65622 cri.go:89] found id: ""
	I0318 22:01:43.327468   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.327478   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:43.327486   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:43.327544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:43.365732   65622 cri.go:89] found id: ""
	I0318 22:01:43.365760   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.365769   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:43.365780   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:43.365793   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:43.425359   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:43.425396   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:43.442136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:43.442161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:43.519737   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:43.519762   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:43.519777   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:43.602933   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:43.602972   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:46.146009   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:46.161266   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:46.161333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:46.203056   65622 cri.go:89] found id: ""
	I0318 22:01:46.203082   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.203094   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:46.203101   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:46.203159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:46.245954   65622 cri.go:89] found id: ""
	I0318 22:01:46.245981   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.245991   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:46.245998   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:46.246069   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:46.282395   65622 cri.go:89] found id: ""
	I0318 22:01:46.282420   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.282431   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:46.282438   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:46.282497   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:46.322036   65622 cri.go:89] found id: ""
	I0318 22:01:46.322061   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.322072   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:46.322079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:46.322136   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:46.360951   65622 cri.go:89] found id: ""
	I0318 22:01:46.360973   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.360981   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:46.360987   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:46.361049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:46.399334   65622 cri.go:89] found id: ""
	I0318 22:01:46.399364   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.399382   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:46.399391   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:46.399450   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:46.443891   65622 cri.go:89] found id: ""
	I0318 22:01:46.443922   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.443933   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:46.443940   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:46.443990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:46.483047   65622 cri.go:89] found id: ""
	I0318 22:01:46.483088   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.483099   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:46.483110   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:46.483124   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:46.542995   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:46.543026   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.559582   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:46.559605   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:46.637046   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:46.637065   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:46.637076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:46.719628   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:46.719657   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.263990   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:49.278403   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:49.278469   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:49.322980   65622 cri.go:89] found id: ""
	I0318 22:01:49.323003   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.323014   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:49.323021   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:49.323077   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:49.360100   65622 cri.go:89] found id: ""
	I0318 22:01:49.360120   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.360127   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:49.360132   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:49.360180   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:49.402044   65622 cri.go:89] found id: ""
	I0318 22:01:49.402084   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.402095   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:49.402103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:49.402164   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:49.442337   65622 cri.go:89] found id: ""
	I0318 22:01:49.442367   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.442391   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:49.442397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:49.442448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:49.479079   65622 cri.go:89] found id: ""
	I0318 22:01:49.479111   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.479124   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:49.479132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:49.479197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:49.526057   65622 cri.go:89] found id: ""
	I0318 22:01:49.526080   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.526090   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:49.526098   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:49.526159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:49.566720   65622 cri.go:89] found id: ""
	I0318 22:01:49.566747   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.566759   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:49.566767   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:49.566821   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:49.603120   65622 cri.go:89] found id: ""
	I0318 22:01:49.603142   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.603152   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:49.603163   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:49.603180   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:49.677879   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:49.677904   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:49.677921   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:49.762904   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:49.762933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.809332   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:49.809358   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:49.861568   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:49.861599   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:52.377996   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:52.396078   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:52.396159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:52.435945   65622 cri.go:89] found id: ""
	I0318 22:01:52.435972   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.435980   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:52.435985   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:52.436034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:52.478723   65622 cri.go:89] found id: ""
	I0318 22:01:52.478754   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.478765   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:52.478772   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:52.478835   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:52.522240   65622 cri.go:89] found id: ""
	I0318 22:01:52.522267   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.522275   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:52.522281   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:52.522336   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:52.560168   65622 cri.go:89] found id: ""
	I0318 22:01:52.560195   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.560202   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:52.560208   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:52.560253   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:52.599730   65622 cri.go:89] found id: ""
	I0318 22:01:52.599752   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.599759   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:52.599765   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:52.599810   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:52.640357   65622 cri.go:89] found id: ""
	I0318 22:01:52.640386   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.640400   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:52.640407   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:52.640465   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:52.680925   65622 cri.go:89] found id: ""
	I0318 22:01:52.680954   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.680966   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:52.680972   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:52.681041   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:52.719537   65622 cri.go:89] found id: ""
	I0318 22:01:52.719561   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.719570   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:52.719580   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:52.719597   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:52.773264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:52.773292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:52.788278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:52.788302   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:52.866674   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:52.866700   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:52.866714   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:52.952228   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:52.952263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:55.499710   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:55.514986   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:55.515049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:55.561168   65622 cri.go:89] found id: ""
	I0318 22:01:55.561191   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.561198   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:55.561204   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:55.561252   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:55.606505   65622 cri.go:89] found id: ""
	I0318 22:01:55.606534   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.606545   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:55.606552   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:55.606613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:55.648625   65622 cri.go:89] found id: ""
	I0318 22:01:55.648655   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.648665   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:55.648672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:55.648731   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:55.690878   65622 cri.go:89] found id: ""
	I0318 22:01:55.690903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.690914   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:55.690923   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:55.690987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:55.729873   65622 cri.go:89] found id: ""
	I0318 22:01:55.729903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.729914   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:55.729921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:55.729982   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:55.767926   65622 cri.go:89] found id: ""
	I0318 22:01:55.767951   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.767959   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:55.767965   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:55.768025   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:55.809907   65622 cri.go:89] found id: ""
	I0318 22:01:55.809934   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.809942   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:55.809947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:55.810009   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:55.853992   65622 cri.go:89] found id: ""
	I0318 22:01:55.854023   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.854032   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:55.854041   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:55.854060   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:55.932160   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:55.932185   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:55.932200   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:56.019976   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:56.020010   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:56.063901   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:56.063935   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:56.119282   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:56.119314   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:58.636555   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:58.651774   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:58.651851   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:58.697005   65622 cri.go:89] found id: ""
	I0318 22:01:58.697037   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.697047   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:58.697055   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:58.697128   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:58.742190   65622 cri.go:89] found id: ""
	I0318 22:01:58.742218   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.742229   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:58.742236   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:58.742297   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:58.779335   65622 cri.go:89] found id: ""
	I0318 22:01:58.779359   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.779378   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:58.779385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:58.779445   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:58.818936   65622 cri.go:89] found id: ""
	I0318 22:01:58.818964   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.818972   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:58.818980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:58.819034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:58.856473   65622 cri.go:89] found id: ""
	I0318 22:01:58.856500   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.856511   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:58.856518   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:58.856579   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:58.897381   65622 cri.go:89] found id: ""
	I0318 22:01:58.897412   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.897423   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:58.897432   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:58.897503   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:58.938179   65622 cri.go:89] found id: ""
	I0318 22:01:58.938209   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.938221   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:58.938228   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:58.938295   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:58.981021   65622 cri.go:89] found id: ""
	I0318 22:01:58.981049   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.981059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:58.981067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:58.981081   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:59.054749   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:59.054779   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:59.070160   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:59.070188   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:59.150369   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:59.150385   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:59.150398   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:59.238341   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:59.238381   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:01.790139   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:01.807948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:01.808006   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:01.855198   65622 cri.go:89] found id: ""
	I0318 22:02:01.855224   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.855231   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:01.855238   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:01.855291   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:01.895292   65622 cri.go:89] found id: ""
	I0318 22:02:01.895313   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.895321   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:01.895326   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:01.895381   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:01.934102   65622 cri.go:89] found id: ""
	I0318 22:02:01.934127   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.934139   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:01.934146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:01.934196   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:01.975676   65622 cri.go:89] found id: ""
	I0318 22:02:01.975704   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.975715   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:01.975723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:01.975789   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:02.015656   65622 cri.go:89] found id: ""
	I0318 22:02:02.015691   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.015701   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:02.015710   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:02.015771   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:02.058634   65622 cri.go:89] found id: ""
	I0318 22:02:02.058658   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.058666   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:02.058672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:02.058719   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:02.096655   65622 cri.go:89] found id: ""
	I0318 22:02:02.096681   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.096692   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:02.096700   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:02.096767   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:02.137485   65622 cri.go:89] found id: ""
	I0318 22:02:02.137510   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.137519   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:02.137527   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:02.137543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:02.221269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:02.221304   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:02.265816   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:02.265846   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:02.321554   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:02.321592   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:02.338503   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:02.338530   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:02.431779   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:04.932229   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:04.948859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:04.948931   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:04.995353   65622 cri.go:89] found id: ""
	I0318 22:02:04.995379   65622 logs.go:276] 0 containers: []
	W0318 22:02:04.995386   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:04.995392   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:04.995438   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:05.034886   65622 cri.go:89] found id: ""
	I0318 22:02:05.034911   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.034922   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:05.034929   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:05.034995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:05.076635   65622 cri.go:89] found id: ""
	I0318 22:02:05.076663   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.076673   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:05.076681   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:05.076742   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:05.119481   65622 cri.go:89] found id: ""
	I0318 22:02:05.119506   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.119514   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:05.119520   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:05.119571   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:05.162331   65622 cri.go:89] found id: ""
	I0318 22:02:05.162354   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.162369   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:05.162376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:05.162428   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:05.206038   65622 cri.go:89] found id: ""
	I0318 22:02:05.206066   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.206076   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:05.206084   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:05.206142   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:05.251273   65622 cri.go:89] found id: ""
	I0318 22:02:05.251298   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.251309   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:05.251316   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:05.251375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:05.292855   65622 cri.go:89] found id: ""
	I0318 22:02:05.292882   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.292892   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:05.292917   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:05.292933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:05.310330   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:05.310354   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:05.384915   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:05.384938   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:05.384957   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:05.472147   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:05.472182   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:05.544328   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:05.544351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:08.101241   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:08.117397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:08.117515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:08.160011   65622 cri.go:89] found id: ""
	I0318 22:02:08.160035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.160043   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:08.160048   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:08.160100   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:08.202826   65622 cri.go:89] found id: ""
	I0318 22:02:08.202849   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.202860   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:08.202867   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:08.202935   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:08.241743   65622 cri.go:89] found id: ""
	I0318 22:02:08.241780   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.241792   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:08.241800   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:08.241864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:08.280725   65622 cri.go:89] found id: ""
	I0318 22:02:08.280758   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.280769   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:08.280777   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:08.280840   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:08.324015   65622 cri.go:89] found id: ""
	I0318 22:02:08.324035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.324041   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:08.324047   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:08.324104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:08.367332   65622 cri.go:89] found id: ""
	I0318 22:02:08.367356   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.367368   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:08.367375   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:08.367433   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:08.407042   65622 cri.go:89] found id: ""
	I0318 22:02:08.407066   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.407073   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:08.407079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:08.407126   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:08.443800   65622 cri.go:89] found id: ""
	I0318 22:02:08.443820   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.443827   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:08.443836   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:08.443850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:08.459139   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:08.459172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:08.534893   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:08.534918   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:08.534934   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:08.627283   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:08.627322   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:08.672928   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:08.672967   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:11.230296   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:11.248814   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:11.248891   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:11.297030   65622 cri.go:89] found id: ""
	I0318 22:02:11.297056   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.297065   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:11.297072   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:11.297133   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:11.348811   65622 cri.go:89] found id: ""
	I0318 22:02:11.348837   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.348847   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:11.348854   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:11.348939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:11.412137   65622 cri.go:89] found id: ""
	I0318 22:02:11.412161   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.412168   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:11.412174   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:11.412231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:11.452098   65622 cri.go:89] found id: ""
	I0318 22:02:11.452128   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.452139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:11.452147   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:11.452207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:11.492477   65622 cri.go:89] found id: ""
	I0318 22:02:11.492509   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.492519   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:11.492527   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:11.492588   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:11.532208   65622 cri.go:89] found id: ""
	I0318 22:02:11.532234   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.532244   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:11.532252   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:11.532306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:11.570515   65622 cri.go:89] found id: ""
	I0318 22:02:11.570545   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.570556   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:11.570563   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:11.570633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:11.613031   65622 cri.go:89] found id: ""
	I0318 22:02:11.613052   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.613069   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:11.613079   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:11.613098   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:11.672019   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:11.672048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:11.687528   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:11.687550   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:11.761149   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:11.761172   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:11.761187   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.847273   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:11.847311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:14.393016   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:14.409657   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:14.409732   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:14.451669   65622 cri.go:89] found id: ""
	I0318 22:02:14.451697   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.451711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:14.451717   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:14.451763   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:14.503383   65622 cri.go:89] found id: ""
	I0318 22:02:14.503408   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.503419   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:14.503427   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:14.503491   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:14.543027   65622 cri.go:89] found id: ""
	I0318 22:02:14.543048   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.543056   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:14.543061   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:14.543104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:14.583615   65622 cri.go:89] found id: ""
	I0318 22:02:14.583639   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.583649   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:14.583656   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:14.583713   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:14.621176   65622 cri.go:89] found id: ""
	I0318 22:02:14.621206   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.621217   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:14.621225   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:14.621283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:14.659419   65622 cri.go:89] found id: ""
	I0318 22:02:14.659440   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.659448   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:14.659454   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:14.659499   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:14.699307   65622 cri.go:89] found id: ""
	I0318 22:02:14.699337   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.699347   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:14.699354   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:14.699416   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:14.737379   65622 cri.go:89] found id: ""
	I0318 22:02:14.737406   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.737414   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:14.737421   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:14.737432   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:14.793912   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:14.793939   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:14.809577   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:14.809604   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:14.898740   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:14.898767   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:14.898782   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:14.981009   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:14.981038   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.526944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:17.543437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:17.543488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:17.585722   65622 cri.go:89] found id: ""
	I0318 22:02:17.585747   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.585757   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:17.585765   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:17.585820   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:17.623603   65622 cri.go:89] found id: ""
	I0318 22:02:17.623632   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.623642   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:17.623650   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:17.623712   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:17.666086   65622 cri.go:89] found id: ""
	I0318 22:02:17.666113   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.666122   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:17.666130   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:17.666188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:17.714403   65622 cri.go:89] found id: ""
	I0318 22:02:17.714430   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.714440   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:17.714448   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:17.714527   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:17.753174   65622 cri.go:89] found id: ""
	I0318 22:02:17.753199   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.753206   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:17.753212   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:17.753270   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:17.794962   65622 cri.go:89] found id: ""
	I0318 22:02:17.794992   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.795002   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:17.795010   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:17.795068   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:17.835446   65622 cri.go:89] found id: ""
	I0318 22:02:17.835469   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.835477   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:17.835482   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:17.835529   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:17.872243   65622 cri.go:89] found id: ""
	I0318 22:02:17.872271   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.872279   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:17.872287   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:17.872299   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.915485   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:17.915520   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:17.969133   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:17.969161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:17.984278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:17.984300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:18.055851   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:18.055871   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:18.055884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:20.646312   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:20.660153   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:20.660220   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:20.704341   65622 cri.go:89] found id: ""
	I0318 22:02:20.704365   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.704376   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:20.704388   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:20.704443   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:20.747673   65622 cri.go:89] found id: ""
	I0318 22:02:20.747694   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.747702   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:20.747708   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:20.747753   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:20.787547   65622 cri.go:89] found id: ""
	I0318 22:02:20.787574   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.787585   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:20.787593   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:20.787694   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:20.830416   65622 cri.go:89] found id: ""
	I0318 22:02:20.830450   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.830461   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:20.830469   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:20.830531   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:20.871867   65622 cri.go:89] found id: ""
	I0318 22:02:20.871899   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.871912   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:20.871919   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:20.871980   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:20.915574   65622 cri.go:89] found id: ""
	I0318 22:02:20.915602   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.915614   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:20.915622   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:20.915680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:20.956277   65622 cri.go:89] found id: ""
	I0318 22:02:20.956313   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.956322   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:20.956329   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:20.956399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:20.997686   65622 cri.go:89] found id: ""
	I0318 22:02:20.997715   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.997723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:20.997732   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:20.997745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:21.015019   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:21.015048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:21.092090   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:21.092117   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:21.092133   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:21.169118   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:21.169149   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:21.215267   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:21.215298   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
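(Editor's note: each cycle in this log repeats the same probes: pgrep for a kube-apiserver process, crictl listings for every control-plane container, then journalctl, dmesg and container-status fallbacks. A minimal sketch of running those same probes by hand from a shell inside the node is shown below; it assumes shell access to the guest, for example via minikube ssh with the profile used by this test, whose name is not shown in this excerpt. The commands themselves are the ones recorded above.

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is any apiserver process running?
    sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, in any state?
    sudo journalctl -u kubelet -n 400                   # kubelet logs, as gathered above
    sudo journalctl -u crio -n 400                      # CRI-O logs, as gathered above

In this run every crictl query returns no IDs, so only the journal, dmesg and container-status fallbacks produce output.)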
	I0318 22:02:23.769587   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:23.784063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:23.784119   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:23.825704   65622 cri.go:89] found id: ""
	I0318 22:02:23.825726   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.825733   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:23.825740   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:23.825795   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:23.871536   65622 cri.go:89] found id: ""
	I0318 22:02:23.871561   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.871579   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:23.871586   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:23.871647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:23.911388   65622 cri.go:89] found id: ""
	I0318 22:02:23.911415   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.911422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:23.911428   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:23.911478   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:23.956649   65622 cri.go:89] found id: ""
	I0318 22:02:23.956671   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.956679   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:23.956687   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:23.956755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:23.999368   65622 cri.go:89] found id: ""
	I0318 22:02:23.999395   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.999405   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:23.999413   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:23.999471   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:24.039075   65622 cri.go:89] found id: ""
	I0318 22:02:24.039105   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.039118   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:24.039124   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:24.039186   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:24.079473   65622 cri.go:89] found id: ""
	I0318 22:02:24.079502   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.079513   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:24.079521   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:24.079587   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:24.118019   65622 cri.go:89] found id: ""
	I0318 22:02:24.118048   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.118059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:24.118069   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:24.118085   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:24.174530   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:24.174562   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:24.191685   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:24.191724   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:24.282133   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:24.282158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:24.282172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:24.366181   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:24.366228   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:26.912982   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:26.927364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:26.927425   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:26.968236   65622 cri.go:89] found id: ""
	I0318 22:02:26.968259   65622 logs.go:276] 0 containers: []
	W0318 22:02:26.968267   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:26.968272   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:26.968339   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:27.008226   65622 cri.go:89] found id: ""
	I0318 22:02:27.008251   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.008261   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:27.008267   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:27.008321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:27.047742   65622 cri.go:89] found id: ""
	I0318 22:02:27.047767   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.047777   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:27.047784   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:27.047844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:27.090692   65622 cri.go:89] found id: ""
	I0318 22:02:27.090722   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.090734   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:27.090741   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:27.090797   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:27.126596   65622 cri.go:89] found id: ""
	I0318 22:02:27.126621   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.126629   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:27.126635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:27.126684   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:27.162492   65622 cri.go:89] found id: ""
	I0318 22:02:27.162521   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.162530   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:27.162535   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:27.162583   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:27.203480   65622 cri.go:89] found id: ""
	I0318 22:02:27.203504   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.203517   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:27.203524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:27.203598   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:27.247140   65622 cri.go:89] found id: ""
	I0318 22:02:27.247162   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.247172   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:27.247182   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:27.247198   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:27.328507   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:27.328529   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:27.328543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:27.409269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:27.409303   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:27.459615   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:27.459647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:27.512980   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:27.513014   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:30.030021   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:30.045235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:30.045288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:30.092857   65622 cri.go:89] found id: ""
	I0318 22:02:30.092896   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.092919   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:30.092927   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:30.092977   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:30.133145   65622 cri.go:89] found id: ""
	I0318 22:02:30.133169   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.133176   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:30.133181   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:30.133244   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:30.179214   65622 cri.go:89] found id: ""
	I0318 22:02:30.179242   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.179252   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:30.179259   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:30.179323   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:30.221500   65622 cri.go:89] found id: ""
	I0318 22:02:30.221524   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.221533   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:30.221541   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:30.221585   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:30.262483   65622 cri.go:89] found id: ""
	I0318 22:02:30.262505   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.262516   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:30.262524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:30.262584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:30.308456   65622 cri.go:89] found id: ""
	I0318 22:02:30.308482   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.308493   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:30.308500   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:30.308544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:30.346818   65622 cri.go:89] found id: ""
	I0318 22:02:30.346845   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.346853   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:30.346859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:30.346914   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:30.387265   65622 cri.go:89] found id: ""
	I0318 22:02:30.387298   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.387307   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:30.387317   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:30.387336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:30.446382   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:30.446409   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:30.462305   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:30.462329   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:30.538560   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:30.538583   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:30.538598   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:30.622537   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:30.622571   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:33.172154   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:33.186477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:33.186540   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:33.223436   65622 cri.go:89] found id: ""
	I0318 22:02:33.223464   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.223474   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:33.223481   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:33.223537   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:33.264785   65622 cri.go:89] found id: ""
	I0318 22:02:33.264810   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.264821   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:33.264829   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:33.264881   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:33.308014   65622 cri.go:89] found id: ""
	I0318 22:02:33.308035   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.308045   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:33.308055   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:33.308109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:33.348188   65622 cri.go:89] found id: ""
	I0318 22:02:33.348215   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.348224   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:33.348231   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:33.348292   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:33.387905   65622 cri.go:89] found id: ""
	I0318 22:02:33.387935   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.387946   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:33.387954   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:33.388015   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:33.430915   65622 cri.go:89] found id: ""
	I0318 22:02:33.430944   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.430956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:33.430964   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:33.431019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:33.473103   65622 cri.go:89] found id: ""
	I0318 22:02:33.473128   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.473135   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:33.473140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:33.473197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:33.512960   65622 cri.go:89] found id: ""
	I0318 22:02:33.512992   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.513003   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:33.513015   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:33.513029   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:33.569517   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:33.569554   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:33.585235   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:33.585263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:33.659494   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:33.659519   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:33.659538   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:33.749134   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:33.749181   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:36.306589   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:36.321602   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:36.321654   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:36.364047   65622 cri.go:89] found id: ""
	I0318 22:02:36.364068   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.364076   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:36.364083   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:36.364139   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:36.406084   65622 cri.go:89] found id: ""
	I0318 22:02:36.406111   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.406119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:36.406125   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:36.406176   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:36.450861   65622 cri.go:89] found id: ""
	I0318 22:02:36.450887   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.450895   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:36.450900   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:36.450946   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:36.493979   65622 cri.go:89] found id: ""
	I0318 22:02:36.494006   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.494014   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:36.494020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:36.494079   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:36.539123   65622 cri.go:89] found id: ""
	I0318 22:02:36.539150   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.539160   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:36.539167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:36.539233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:36.577460   65622 cri.go:89] found id: ""
	I0318 22:02:36.577485   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.577495   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:36.577502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:36.577546   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:36.615276   65622 cri.go:89] found id: ""
	I0318 22:02:36.615300   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.615308   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:36.615313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:36.615369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:36.652756   65622 cri.go:89] found id: ""
	I0318 22:02:36.652775   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.652782   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:36.652790   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:36.652802   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:36.706253   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:36.706282   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:36.722032   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:36.722055   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:36.797758   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:36.797783   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:36.797799   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:36.875589   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:36.875622   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:39.422267   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:39.436967   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:39.437040   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:39.479916   65622 cri.go:89] found id: ""
	I0318 22:02:39.479941   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.479950   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:39.479956   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:39.480012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:39.542890   65622 cri.go:89] found id: ""
	I0318 22:02:39.542920   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.542930   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:39.542937   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:39.542990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:39.588200   65622 cri.go:89] found id: ""
	I0318 22:02:39.588225   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.588233   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:39.588239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:39.588290   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:39.629014   65622 cri.go:89] found id: ""
	I0318 22:02:39.629036   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.629043   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:39.629049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:39.629105   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:39.675522   65622 cri.go:89] found id: ""
	I0318 22:02:39.675551   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.675561   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:39.675569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:39.675629   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:39.722842   65622 cri.go:89] found id: ""
	I0318 22:02:39.722873   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.722883   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:39.722890   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:39.722951   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:39.760410   65622 cri.go:89] found id: ""
	I0318 22:02:39.760440   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.760451   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:39.760458   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:39.760519   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:39.799982   65622 cri.go:89] found id: ""
	I0318 22:02:39.800007   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.800016   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:39.800027   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:39.800045   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:39.878784   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:39.878805   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:39.878821   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:39.965987   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:39.966021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:40.015006   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:40.015040   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:40.068619   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:40.068648   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:42.586444   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:42.603310   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:42.603394   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:42.645260   65622 cri.go:89] found id: ""
	I0318 22:02:42.645288   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.645296   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:42.645301   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:42.645360   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:42.682004   65622 cri.go:89] found id: ""
	I0318 22:02:42.682029   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.682036   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:42.682042   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:42.682086   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:42.722886   65622 cri.go:89] found id: ""
	I0318 22:02:42.722922   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.722939   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:42.722947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:42.723008   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:42.759183   65622 cri.go:89] found id: ""
	I0318 22:02:42.759208   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.759218   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:42.759224   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:42.759283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:42.799292   65622 cri.go:89] found id: ""
	I0318 22:02:42.799316   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.799325   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:42.799337   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:42.799389   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:42.838821   65622 cri.go:89] found id: ""
	I0318 22:02:42.838848   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.838856   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:42.838861   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:42.838908   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:42.877889   65622 cri.go:89] found id: ""
	I0318 22:02:42.877917   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.877927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:42.877935   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:42.877991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:42.921283   65622 cri.go:89] found id: ""
	I0318 22:02:42.921310   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.921323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:42.921334   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:42.921348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:43.000405   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:43.000444   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:43.042091   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:43.042116   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:43.094030   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:43.094059   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:43.108612   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:43.108647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:43.194388   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
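(Editor's note: the describe-nodes step fails the same way on every cycle: the bundled v1.20.0 kubectl under /var/lib/minikube/binaries cannot reach the apiserver on localhost:8443, which is consistent with the empty crictl listings above. A minimal sketch of confirming from inside the node that nothing is listening on that port, assuming ss and curl are available in the guest image:

    sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'
    curl -ksS https://localhost:8443/healthz || true    # expect "connection refused" while the apiserver is down
)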
	I0318 22:02:45.694881   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:45.709833   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:45.709897   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:45.749770   65622 cri.go:89] found id: ""
	I0318 22:02:45.749797   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.749806   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:45.749812   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:45.749866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:45.794879   65622 cri.go:89] found id: ""
	I0318 22:02:45.794909   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.794920   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:45.794928   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:45.794988   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:45.841587   65622 cri.go:89] found id: ""
	I0318 22:02:45.841608   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.841618   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:45.841625   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:45.841725   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:45.884972   65622 cri.go:89] found id: ""
	I0318 22:02:45.885004   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.885015   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:45.885023   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:45.885084   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:45.936170   65622 cri.go:89] found id: ""
	I0318 22:02:45.936204   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.936215   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:45.936223   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:45.936286   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:45.993684   65622 cri.go:89] found id: ""
	I0318 22:02:45.993708   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.993715   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:45.993720   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:45.993766   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:46.048422   65622 cri.go:89] found id: ""
	I0318 22:02:46.048445   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.048453   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:46.048459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:46.048512   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:46.087173   65622 cri.go:89] found id: ""
	I0318 22:02:46.087197   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.087206   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:46.087214   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:46.087227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:46.168633   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:46.168661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:46.168675   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:46.250797   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:46.250827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:46.302862   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:46.302883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:46.358096   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:46.358125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:48.874275   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:48.890166   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:48.890231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:48.930832   65622 cri.go:89] found id: ""
	I0318 22:02:48.930861   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.930869   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:48.930875   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:48.930919   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:48.972784   65622 cri.go:89] found id: ""
	I0318 22:02:48.972809   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.972819   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:48.972826   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:48.972884   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:49.011201   65622 cri.go:89] found id: ""
	I0318 22:02:49.011222   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.011229   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:49.011235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:49.011277   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:49.050457   65622 cri.go:89] found id: ""
	I0318 22:02:49.050480   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.050496   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:49.050502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:49.050565   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:49.087585   65622 cri.go:89] found id: ""
	I0318 22:02:49.087611   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.087621   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:49.087629   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:49.087687   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:49.126761   65622 cri.go:89] found id: ""
	I0318 22:02:49.126794   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.126805   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:49.126813   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:49.126874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:49.166045   65622 cri.go:89] found id: ""
	I0318 22:02:49.166074   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.166085   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:49.166092   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:49.166147   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:49.205624   65622 cri.go:89] found id: ""
	I0318 22:02:49.205650   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.205660   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:49.205670   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:49.205684   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:49.257864   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:49.257891   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:49.272581   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:49.272606   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:49.349960   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:49.349981   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:49.349996   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:49.438873   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:49.438916   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:51.984840   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:52.002378   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:52.002436   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:52.040871   65622 cri.go:89] found id: ""
	I0318 22:02:52.040890   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.040898   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:52.040917   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:52.040973   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:52.076062   65622 cri.go:89] found id: ""
	I0318 22:02:52.076083   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.076090   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:52.076096   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:52.076167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:52.119597   65622 cri.go:89] found id: ""
	I0318 22:02:52.119621   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.119629   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:52.119635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:52.119690   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:52.157892   65622 cri.go:89] found id: ""
	I0318 22:02:52.157919   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.157929   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:52.157936   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:52.157995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:52.196738   65622 cri.go:89] found id: ""
	I0318 22:02:52.196760   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.196767   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:52.196772   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:52.196836   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:52.234012   65622 cri.go:89] found id: ""
	I0318 22:02:52.234036   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.234043   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:52.234049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:52.234104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:52.273720   65622 cri.go:89] found id: ""
	I0318 22:02:52.273750   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.273761   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:52.273769   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:52.273817   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:52.317495   65622 cri.go:89] found id: ""
	I0318 22:02:52.317525   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.317535   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:52.317545   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:52.317619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:52.371640   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:52.371666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:52.387141   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:52.387165   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:52.469009   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:52.469035   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:52.469047   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:52.550848   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:52.550880   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:55.096980   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:55.111353   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:55.111406   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:55.155832   65622 cri.go:89] found id: ""
	I0318 22:02:55.155857   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.155875   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:55.155882   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:55.155942   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:55.195477   65622 cri.go:89] found id: ""
	I0318 22:02:55.195499   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.195509   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:55.195516   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:55.195567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:55.234536   65622 cri.go:89] found id: ""
	I0318 22:02:55.234564   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.234574   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:55.234582   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:55.234640   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:55.270955   65622 cri.go:89] found id: ""
	I0318 22:02:55.270977   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.270984   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:55.270989   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:55.271033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:55.308883   65622 cri.go:89] found id: ""
	I0318 22:02:55.308919   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.308930   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:55.308937   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:55.308985   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:55.355259   65622 cri.go:89] found id: ""
	I0318 22:02:55.355284   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.355294   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:55.355301   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:55.355364   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:55.392385   65622 cri.go:89] found id: ""
	I0318 22:02:55.392409   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.392417   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:55.392423   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:55.392466   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:55.433773   65622 cri.go:89] found id: ""
	I0318 22:02:55.433794   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.433802   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:55.433810   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:55.433827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:55.518513   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:55.518536   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:55.518553   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:55.602717   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:55.602751   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:55.652409   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:55.652436   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:55.707150   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:55.707175   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
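The probe cycle above reduces to a handful of shell commands run over SSH; a minimal sketch of the same checks, assuming a shell on the node and the v1.20.0 kubectl path named in the log:

    sudo crictl ps -a --quiet --name=kube-apiserver   # repeated once per control-plane component
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

In this run all of them come back empty or fail, because no control-plane containers exist and the apiserver on localhost:8443 refuses connections.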
	I0318 22:02:58.223146   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:58.240213   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:58.240288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:58.280676   65622 cri.go:89] found id: ""
	I0318 22:02:58.280702   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.280711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:58.280719   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:58.280778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:58.324490   65622 cri.go:89] found id: ""
	I0318 22:02:58.324515   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.324524   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:58.324531   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:58.324592   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:58.370256   65622 cri.go:89] found id: ""
	I0318 22:02:58.370288   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.370298   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:58.370309   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:58.370369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:58.419969   65622 cri.go:89] found id: ""
	I0318 22:02:58.420002   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.420012   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:58.420020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:58.420082   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:58.464916   65622 cri.go:89] found id: ""
	I0318 22:02:58.464942   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.464950   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:58.464956   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:58.465016   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:58.511388   65622 cri.go:89] found id: ""
	I0318 22:02:58.511415   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.511425   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:58.511433   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:58.511500   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:58.555314   65622 cri.go:89] found id: ""
	I0318 22:02:58.555344   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.555356   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:58.555364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:58.555426   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:58.595200   65622 cri.go:89] found id: ""
	I0318 22:02:58.595229   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.595239   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:58.595249   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:58.595263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:58.642037   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:58.642069   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:58.700216   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:58.700247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.715851   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:58.715882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:58.792139   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:58.792158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:58.792171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:01.395212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:01.411364   65622 kubeadm.go:591] duration metric: took 4m3.302597324s to restartPrimaryControlPlane
	W0318 22:03:01.411442   65622 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:01.411474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:02.800222   65622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.388721926s)
	I0318 22:03:02.800302   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:02.817517   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:02.832036   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:02.844307   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:02.844324   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:02.844381   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:02.857804   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:02.857882   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:02.871307   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:02.883191   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:02.883252   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:02.896457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.908089   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:02.908147   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.920327   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:02.932098   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:02.932158   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
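The stale-config cleanup above applies the same check to each kubeconfig file in turn; a compact sketch of that pattern, using only the endpoint and file names from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected control-plane endpoint
      sudo grep "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done

Here every grep exits with status 2 because the files do not exist, so each rm is effectively a no-op.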
	I0318 22:03:02.944129   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:03.034197   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:03:03.034333   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:03.204271   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:03.204501   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:03.204645   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:03.415789   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:03.417688   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:03.417801   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:03.417902   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:03.418026   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:03.418129   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:03.418242   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:03.418324   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:03.418420   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:03.418502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:03.418614   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:03.418744   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:03.418823   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:03.418916   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:03.644844   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:03.912013   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:04.097560   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:04.222469   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:04.239066   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:04.250168   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:04.250225   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:04.399277   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:04.401154   65622 out.go:204]   - Booting up control plane ...
	I0318 22:03:04.401283   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:04.406500   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:04.407544   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:04.410177   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:04.418949   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:44.419883   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:03:44.420568   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:44.420749   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:49.421054   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:49.421381   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:59.422001   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:59.422212   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:19.422826   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:19.423111   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:59.424318   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:59.425052   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:59.425084   65622 kubeadm.go:309] 
	I0318 22:04:59.425146   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:04:59.425207   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:04:59.425223   65622 kubeadm.go:309] 
	I0318 22:04:59.425262   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:04:59.425298   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:04:59.425454   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:04:59.425481   65622 kubeadm.go:309] 
	I0318 22:04:59.425647   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:04:59.425704   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:04:59.425752   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:04:59.425762   65622 kubeadm.go:309] 
	I0318 22:04:59.425917   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:04:59.426033   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:04:59.426045   65622 kubeadm.go:309] 
	I0318 22:04:59.426212   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:04:59.426346   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:04:59.426454   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:04:59.426547   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:04:59.426558   65622 kubeadm.go:309] 
	I0318 22:04:59.427148   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:59.427271   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:04:59.427372   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 22:04:59.427528   65622 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
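Before the reset and retry that follow, the troubleshooting steps kubeadm prints above could be run by hand; a minimal sketch, using only the commands quoted in the output and the CRI socket minikube passes:

    systemctl status kubelet                   # is the kubelet running at all?
    journalctl -xeu kubelet                    # why it keeps failing the health checks
    curl -sSL http://localhost:10248/healthz   # the probe kubeadm's kubelet-check performs
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo systemctl enable kubelet.service      # addresses the [WARNING Service-Kubelet] above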
	
	I0318 22:04:59.427572   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:05:00.055064   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:05:00.070514   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:05:00.083916   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:05:00.083938   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:05:00.083984   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:05:00.095316   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:05:00.095362   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:05:00.106457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:05:00.117255   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:05:00.117309   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:05:00.128432   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.138314   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:05:00.138371   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.148443   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:05:00.158539   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:05:00.158585   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:05:00.169165   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:05:00.245400   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:05:00.245473   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:05:00.417644   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:05:00.417785   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:05:00.417883   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:05:00.634147   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:05:00.635738   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:05:00.635843   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:05:00.635930   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:05:00.636028   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:05:00.636089   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:05:00.636314   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:05:00.636537   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:05:00.636954   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:05:00.637502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:05:00.637924   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:05:00.638340   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:05:00.638425   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:05:00.638514   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:05:00.913839   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:05:00.990231   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:05:01.230957   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:05:01.548589   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:05:01.567890   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:05:01.569831   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:05:01.569913   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:05:01.734815   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:05:01.736685   65622 out.go:204]   - Booting up control plane ...
	I0318 22:05:01.736810   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:05:01.749926   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:05:01.751335   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:05:01.753793   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:05:01.754600   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:05:41.756944   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:05:41.757321   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:41.757565   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:46.758228   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:46.758483   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:56.759061   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:56.759280   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:16.760134   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:16.760369   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761317   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:56.761611   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761630   65622 kubeadm.go:309] 
	I0318 22:06:56.761682   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:06:56.761725   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:06:56.761732   65622 kubeadm.go:309] 
	I0318 22:06:56.761782   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:06:56.761829   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:06:56.761971   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:06:56.761988   65622 kubeadm.go:309] 
	I0318 22:06:56.762111   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:06:56.762159   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:06:56.762207   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:06:56.762221   65622 kubeadm.go:309] 
	I0318 22:06:56.762382   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:06:56.762502   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:06:56.762512   65622 kubeadm.go:309] 
	I0318 22:06:56.762630   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:06:56.762758   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:06:56.762856   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:06:56.762985   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:06:56.763011   65622 kubeadm.go:309] 
	I0318 22:06:56.763456   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:06:56.763590   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:06:56.763681   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 22:06:56.763764   65622 kubeadm.go:393] duration metric: took 7m58.719030677s to StartCluster
	I0318 22:06:56.763817   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:06:56.763885   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:06:56.813440   65622 cri.go:89] found id: ""
	I0318 22:06:56.813469   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.813480   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:06:56.813487   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:06:56.813553   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:06:56.852826   65622 cri.go:89] found id: ""
	I0318 22:06:56.852854   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.852865   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:06:56.852872   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:06:56.852949   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:06:56.894024   65622 cri.go:89] found id: ""
	I0318 22:06:56.894049   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.894057   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:06:56.894062   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:06:56.894123   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:06:56.932924   65622 cri.go:89] found id: ""
	I0318 22:06:56.932955   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.932967   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:06:56.932975   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:06:56.933033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:06:56.973307   65622 cri.go:89] found id: ""
	I0318 22:06:56.973336   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.973344   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:06:56.973350   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:06:56.973405   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:06:57.009107   65622 cri.go:89] found id: ""
	I0318 22:06:57.009134   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.009142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:06:57.009151   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:06:57.009213   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:06:57.046883   65622 cri.go:89] found id: ""
	I0318 22:06:57.046912   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.046922   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:06:57.046930   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:06:57.046991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:06:57.087670   65622 cri.go:89] found id: ""
	I0318 22:06:57.087698   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.087709   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:06:57.087722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:06:57.087736   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:06:57.143284   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:06:57.143320   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:06:57.159775   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:06:57.159803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:06:57.248520   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:06:57.248548   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:06:57.248563   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:06:57.368197   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:06:57.368230   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 22:06:57.413080   65622 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 22:06:57.413134   65622 out.go:239] * 
	* 
	W0318 22:06:57.413205   65622 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.413237   65622 out.go:239] * 
	* 
	W0318 22:06:57.414373   65622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 22:06:57.417746   65622 out.go:177] 
	W0318 22:06:57.418940   65622 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.419004   65622 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 22:06:57.419028   65622 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 22:06:57.420531   65622 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-648232 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
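The kubeadm output above shows the kubelet never answering on :10248, and minikube's own suggestion in the log is to retry with the kubelet cgroup driver pinned to systemd. A minimal retry sketch, reusing the exact flags from the failed invocation above; only the trailing --extra-config flag is added, taken from the suggestion printed in the log (not verified against this run):

	out/minikube-linux-amd64 start -p old-k8s-version-648232 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd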
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 2 (239.31808ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
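Since the host still reports Running, the kubelet and container state could also be inspected by hand before reading the collected logs, following the commands the kubeadm output itself suggests (a sketch only; the crio socket path is the one named in the log above):

	out/minikube-linux-amd64 -p old-k8s-version-648232 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-648232 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-648232 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"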
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-648232 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-648232 logs -n 25: (1.568303812s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-389288 sudo cat                              | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo find                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo crio                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-389288                                       | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-369155 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | disable-driver-mounts-369155                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:50 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-660775  | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC |                     |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-141758            | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:54:36.607114   65699 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:54:36.607254   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607266   65699 out.go:304] Setting ErrFile to fd 2...
	I0318 21:54:36.607272   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607706   65699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:54:36.608596   65699 out.go:298] Setting JSON to false
	I0318 21:54:36.609468   65699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5821,"bootTime":1710793056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:54:36.609529   65699 start.go:139] virtualization: kvm guest
	I0318 21:54:36.611401   65699 out.go:177] * [no-preload-963041] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:54:36.612703   65699 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:54:36.612704   65699 notify.go:220] Checking for updates...
	I0318 21:54:36.613976   65699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:54:36.615157   65699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:54:36.616283   65699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:54:36.617431   65699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:54:36.618615   65699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:54:36.620094   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:54:36.620490   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.620537   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.634914   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0318 21:54:36.635251   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.635706   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.635728   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.636019   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.636173   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.636411   65699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:54:36.636719   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.636756   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.650608   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0318 21:54:36.650946   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.651358   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.651383   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.651694   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.651832   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.682407   65699 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:54:36.683826   65699 start.go:297] selected driver: kvm2
	I0318 21:54:36.683837   65699 start.go:901] validating driver "kvm2" against &{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.683941   65699 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:54:36.684624   65699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.684696   65699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:54:36.699415   65699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:54:36.699766   65699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:54:36.699827   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:54:36.699840   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:54:36.699883   65699 start.go:340] cluster config:
	{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.699984   65699 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.701584   65699 out.go:177] * Starting "no-preload-963041" primary control-plane node in "no-preload-963041" cluster
	I0318 21:54:36.702792   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:54:36.702911   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:54:36.703027   65699 cache.go:107] acquiring lock: {Name:mk20bcc8d34b80cc44c1e33bc5e0ec5cd82ba46e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703044   65699 cache.go:107] acquiring lock: {Name:mk299438a86024ea6c96280d8bbe30c1283fa996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703087   65699 cache.go:107] acquiring lock: {Name:mkf5facbc69c16807f75e75a80a4afa3f97a0ecc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703124   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0318 21:54:36.703127   65699 start.go:360] acquireMachinesLock for no-preload-963041: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:54:36.703141   65699 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 102.209µs
	I0318 21:54:36.703156   65699 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0318 21:54:36.703104   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 21:54:36.703174   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 21:54:36.703172   65699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 156.262µs
	I0318 21:54:36.703190   65699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703043   65699 cache.go:107] acquiring lock: {Name:mk4c82b4e60b551671fa99921294b8e1f551d382 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703189   65699 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 104.037µs
	I0318 21:54:36.703209   65699 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 21:54:36.703137   65699 cache.go:107] acquiring lock: {Name:mk847ac7ddb8863389782289e61001579ff6ec5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703204   65699 cache.go:107] acquiring lock: {Name:mk1bf8cc3e30a7cf88f25697f1021501ea6ee4ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703243   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 21:54:36.703254   65699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 163.57µs
	I0318 21:54:36.703233   65699 cache.go:107] acquiring lock: {Name:mkf9c9b33c4d1ca54e3364ad39dcd3b10bc50534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703265   65699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703224   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 21:54:36.703282   65699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 247.672µs
	I0318 21:54:36.703293   65699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 21:54:36.703293   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 21:54:36.703293   65699 cache.go:107] acquiring lock: {Name:mkd0bd00e6f69df37097a8ce792bcc8844efbc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703315   65699 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 156.33µs
	I0318 21:54:36.703329   65699 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 21:54:36.703363   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 21:54:36.703385   65699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 207.404µs
	I0318 21:54:36.703400   65699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703411   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 21:54:36.703419   65699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 164.5µs
	I0318 21:54:36.703435   65699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703447   65699 cache.go:87] Successfully saved all images to host disk.
	I0318 21:54:40.421098   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:43.493261   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:49.573105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:52.645158   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:58.725124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:01.797077   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:07.877116   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:10.949096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:17.029117   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:20.101131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:26.181141   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:29.253113   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:35.333097   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:38.405132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:44.485208   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:47.557123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:53.637185   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:56.709102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:02.789134   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:05.861146   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:11.941102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:15.013092   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:21.093132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:24.165129   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:30.245127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:33.317151   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:39.397126   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:42.469163   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:48.549145   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:51.621085   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:57.701118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:00.773108   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:06.853105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:09.925096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:16.005131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:19.077111   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:25.157130   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:28.229107   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:34.309152   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:37.381127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:43.461123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:46.533127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:52.613124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:55.685135   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:01.765118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:04.837197   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
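The long run of "Error dialing TCP ... no route to host" entries above is libmachine probing the stopped VM's SSH port (192.168.50.150:22) until the guest comes back. A minimal sketch of that kind of dial-and-retry probe is shown below; it is illustrative only and not minikube's actual code (the helper name waitForSSHPort is made up).

// Sketch: keep dialing the guest's SSH port until it answers or an
// overall deadline passes, mirroring the retry loop in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSHPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		// Matches the shape of the logged failures while the VM is down.
		fmt.Printf("Error dialing TCP: %v\n", err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSHPort("192.168.50.150:22", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}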
	I0318 21:58:07.840986   65211 start.go:364] duration metric: took 4m36.169318619s to acquireMachinesLock for "embed-certs-141758"
	I0318 21:58:07.841046   65211 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:07.841054   65211 fix.go:54] fixHost starting: 
	I0318 21:58:07.841507   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:07.841544   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:07.856544   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0318 21:58:07.856976   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:07.857424   65211 main.go:141] libmachine: Using API Version  1
	I0318 21:58:07.857452   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:07.857783   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:07.857971   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:07.858126   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 21:58:07.859909   65211 fix.go:112] recreateIfNeeded on embed-certs-141758: state=Stopped err=<nil>
	I0318 21:58:07.859947   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	W0318 21:58:07.860120   65211 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:07.862134   65211 out.go:177] * Restarting existing kvm2 VM for "embed-certs-141758" ...
	I0318 21:58:07.838706   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:07.838746   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839036   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:58:07.839060   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839263   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:58:07.840867   65170 machine.go:97] duration metric: took 4m37.426711052s to provisionDockerMachine
	I0318 21:58:07.840915   65170 fix.go:56] duration metric: took 4m37.446713188s for fixHost
	I0318 21:58:07.840923   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 4m37.446748943s
	W0318 21:58:07.840945   65170 start.go:713] error starting host: provision: host is not running
	W0318 21:58:07.841017   65170 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 21:58:07.841026   65170 start.go:728] Will try again in 5 seconds ...
	I0318 21:58:07.863352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Start
	I0318 21:58:07.863483   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring networks are active...
	I0318 21:58:07.864202   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network default is active
	I0318 21:58:07.864652   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network mk-embed-certs-141758 is active
	I0318 21:58:07.865077   65211 main.go:141] libmachine: (embed-certs-141758) Getting domain xml...
	I0318 21:58:07.865858   65211 main.go:141] libmachine: (embed-certs-141758) Creating domain...
	I0318 21:58:09.026367   65211 main.go:141] libmachine: (embed-certs-141758) Waiting to get IP...
	I0318 21:58:09.027144   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.027524   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.027580   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.027503   66223 retry.go:31] will retry after 260.499882ms: waiting for machine to come up
	I0318 21:58:09.289935   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.290490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.290522   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.290450   66223 retry.go:31] will retry after 328.000758ms: waiting for machine to come up
	I0318 21:58:09.619947   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.620337   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.620384   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.620305   66223 retry.go:31] will retry after 419.640035ms: waiting for machine to come up
	I0318 21:58:10.041775   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.042186   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.042213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.042134   66223 retry.go:31] will retry after 482.732439ms: waiting for machine to come up
	I0318 21:58:10.526892   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.527282   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.527307   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.527253   66223 retry.go:31] will retry after 718.696645ms: waiting for machine to come up
	I0318 21:58:11.247165   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.247545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.247571   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.247501   66223 retry.go:31] will retry after 603.951593ms: waiting for machine to come up
	I0318 21:58:12.842928   65170 start.go:360] acquireMachinesLock for default-k8s-diff-port-660775: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:58:11.853119   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.853408   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.853438   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.853362   66223 retry.go:31] will retry after 1.191963995s: waiting for machine to come up
	I0318 21:58:13.046915   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:13.047289   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:13.047319   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:13.047237   66223 retry.go:31] will retry after 1.314666633s: waiting for machine to come up
	I0318 21:58:14.363693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:14.364109   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:14.364135   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:14.364064   66223 retry.go:31] will retry after 1.341191632s: waiting for machine to come up
	I0318 21:58:15.707425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:15.707921   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:15.707951   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:15.707862   66223 retry.go:31] will retry after 1.887572842s: waiting for machine to come up
	I0318 21:58:17.596545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:17.596970   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:17.597002   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:17.596899   66223 retry.go:31] will retry after 2.820006704s: waiting for machine to come up
	I0318 21:58:20.420327   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:20.420693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:20.420714   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:20.420659   66223 retry.go:31] will retry after 3.099836206s: waiting for machine to come up
	I0318 21:58:23.522155   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:23.522490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:23.522517   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:23.522450   66223 retry.go:31] will retry after 4.512794132s: waiting for machine to come up
	I0318 21:58:29.414007   65622 start.go:364] duration metric: took 3m59.339882587s to acquireMachinesLock for "old-k8s-version-648232"
	I0318 21:58:29.414072   65622 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:29.414080   65622 fix.go:54] fixHost starting: 
	I0318 21:58:29.414429   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:29.414462   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:29.431057   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0318 21:58:29.431482   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:29.432042   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:58:29.432067   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:29.432376   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:29.432568   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:29.432725   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetState
	I0318 21:58:29.433956   65622 fix.go:112] recreateIfNeeded on old-k8s-version-648232: state=Stopped err=<nil>
	I0318 21:58:29.433996   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	W0318 21:58:29.434155   65622 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:29.436328   65622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-648232" ...
	I0318 21:58:29.437884   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .Start
	I0318 21:58:29.438022   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring networks are active...
	I0318 21:58:29.438616   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network default is active
	I0318 21:58:29.438967   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network mk-old-k8s-version-648232 is active
	I0318 21:58:29.439362   65622 main.go:141] libmachine: (old-k8s-version-648232) Getting domain xml...
	I0318 21:58:29.440065   65622 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
	I0318 21:58:28.036425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.036898   65211 main.go:141] libmachine: (embed-certs-141758) Found IP for machine: 192.168.39.243
	I0318 21:58:28.036949   65211 main.go:141] libmachine: (embed-certs-141758) Reserving static IP address...
	I0318 21:58:28.036967   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has current primary IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.037428   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.037452   65211 main.go:141] libmachine: (embed-certs-141758) DBG | skip adding static IP to network mk-embed-certs-141758 - found existing host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"}
	I0318 21:58:28.037461   65211 main.go:141] libmachine: (embed-certs-141758) Reserved static IP address: 192.168.39.243
	I0318 21:58:28.037473   65211 main.go:141] libmachine: (embed-certs-141758) Waiting for SSH to be available...
	I0318 21:58:28.037485   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Getting to WaitForSSH function...
	I0318 21:58:28.039459   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039778   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.039810   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039928   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH client type: external
	I0318 21:58:28.039955   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa (-rw-------)
	I0318 21:58:28.039995   65211 main.go:141] libmachine: (embed-certs-141758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:28.040027   65211 main.go:141] libmachine: (embed-certs-141758) DBG | About to run SSH command:
	I0318 21:58:28.040044   65211 main.go:141] libmachine: (embed-certs-141758) DBG | exit 0
	I0318 21:58:28.169219   65211 main.go:141] libmachine: (embed-certs-141758) DBG | SSH cmd err, output: <nil>: 
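The "Using SSH client type: external" block above shells out to the system ssh binary with non-interactive options and runs `exit 0` to confirm the guest accepts logins. A self-contained sketch of that probe via os/exec is below; the option list is abbreviated and the IP/key path are taken from the log purely as examples.

// Sketch: probe a guest over SSH by running `exit 0` through the external
// ssh client, roughly as WaitForSSH does in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	if err := probeSSH("192.168.39.243", "/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa"); err != nil {
		fmt.Println(err)
	}
}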
	I0318 21:58:28.169554   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetConfigRaw
	I0318 21:58:28.170153   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.172372   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.172760   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.172787   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.173016   65211 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/config.json ...
	I0318 21:58:28.173186   65211 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:28.173203   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:28.173399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.175433   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175767   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.175802   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175920   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.176079   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176389   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.176553   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.176790   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.176805   65211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:28.285370   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:28.285407   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285629   65211 buildroot.go:166] provisioning hostname "embed-certs-141758"
	I0318 21:58:28.285651   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285856   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.288382   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288708   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.288739   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288863   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.289067   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289220   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289361   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.289515   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.289717   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.289735   65211 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-141758 && echo "embed-certs-141758" | sudo tee /etc/hostname
	I0318 21:58:28.420311   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-141758
	
	I0318 21:58:28.420351   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.422864   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.423245   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423431   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.423608   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423759   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423891   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.424044   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.424234   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.424256   65211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-141758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-141758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-141758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:28.549277   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:28.549307   65211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:28.549325   65211 buildroot.go:174] setting up certificates
	I0318 21:58:28.549334   65211 provision.go:84] configureAuth start
	I0318 21:58:28.549343   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.549572   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.551881   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552183   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.552205   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.554341   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554629   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.554656   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554752   65211 provision.go:143] copyHostCerts
	I0318 21:58:28.554812   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:28.554825   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:28.554912   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:28.555020   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:28.555032   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:28.555062   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:28.555145   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:28.555155   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:28.555192   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:28.555259   65211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.embed-certs-141758 san=[127.0.0.1 192.168.39.243 embed-certs-141758 localhost minikube]
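The provision step above issues a server certificate whose SANs cover 127.0.0.1, the VM IP, the profile name, localhost and minikube, signed by the CA under .minikube/certs. The sketch below shows how such a SAN-bearing certificate can be created with crypto/x509; it generates a throwaway CA in-memory instead of loading minikube's, and error handling is elided for brevity.

// Sketch: issue a server cert with the SANs listed in the provision log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube reuses certs/ca.pem and ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-141758"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-141758", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.243")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}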
	I0318 21:58:28.706111   65211 provision.go:177] copyRemoteCerts
	I0318 21:58:28.706158   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:28.706185   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.708537   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708795   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.708822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.709164   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.709335   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.709446   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:28.796199   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:28.827207   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:58:28.854273   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:28.880505   65211 provision.go:87] duration metric: took 331.161751ms to configureAuth
	I0318 21:58:28.880524   65211 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:28.880716   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:58:28.880801   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.883232   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.883583   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883753   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.883926   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884087   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884186   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.884339   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.884481   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.884496   65211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:29.164330   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:29.164357   65211 machine.go:97] duration metric: took 991.159236ms to provisionDockerMachine
	I0318 21:58:29.164370   65211 start.go:293] postStartSetup for "embed-certs-141758" (driver="kvm2")
	I0318 21:58:29.164381   65211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:29.164434   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.164734   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:29.164758   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.167400   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167696   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.167719   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167867   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.168065   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.168235   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.168352   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.256141   65211 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:29.261086   65211 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:29.261104   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:29.261157   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:29.261229   65211 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:29.261309   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:29.271174   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:29.297161   65211 start.go:296] duration metric: took 132.781067ms for postStartSetup
	I0318 21:58:29.297192   65211 fix.go:56] duration metric: took 21.456139061s for fixHost
	I0318 21:58:29.297208   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.299741   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300102   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.300127   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300289   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.300480   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300750   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.300864   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:29.301028   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:29.301039   65211 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:29.413842   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799109.363417589
	
	I0318 21:58:29.413869   65211 fix.go:216] guest clock: 1710799109.363417589
	I0318 21:58:29.413876   65211 fix.go:229] Guest: 2024-03-18 21:58:29.363417589 +0000 UTC Remote: 2024-03-18 21:58:29.297195181 +0000 UTC m=+297.765354372 (delta=66.222408ms)
	I0318 21:58:29.413892   65211 fix.go:200] guest clock delta is within tolerance: 66.222408ms
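The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the 66ms delta as within tolerance. A small sketch of that comparison, using the exact timestamps from the log:

// Sketch: guest-vs-host clock delta check, with values taken from the log.
package main

import (
	"fmt"
	"math"
	"time"
)

func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	return math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	guest := time.Unix(1710799109, 363417589) // 2024-03-18 21:58:29.363417589 UTC
	host := time.Unix(1710799109, 297195181)  // 2024-03-18 21:58:29.297195181 UTC
	fmt.Println("delta:", guest.Sub(host), "within tolerance:", clockDeltaWithinTolerance(guest, host, 2*time.Second))
}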
	I0318 21:58:29.413899   65211 start.go:83] releasing machines lock for "embed-certs-141758", held for 21.572869797s
	I0318 21:58:29.413932   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.414191   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:29.416929   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417293   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.417318   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417500   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418019   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418159   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418230   65211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:29.418275   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.418330   65211 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:29.418344   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.420728   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421022   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421053   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421076   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421228   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421413   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421464   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421493   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421593   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.421673   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421749   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.421828   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421960   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.422081   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.502548   65211 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:29.531994   65211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:29.681482   65211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:29.689671   65211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:29.689735   65211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:29.711660   65211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:29.711682   65211 start.go:494] detecting cgroup driver to use...
	I0318 21:58:29.711750   65211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:29.728159   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:29.742409   65211 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:29.742450   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:29.757587   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:29.772218   65211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:29.883164   65211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:30.046773   65211 docker.go:233] disabling docker service ...
	I0318 21:58:30.046845   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:30.065878   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:30.081551   65211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:30.223188   65211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:30.353535   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:30.370291   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:30.391728   65211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:58:30.391789   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.409204   65211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:30.409281   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.426464   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.439964   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.452097   65211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:30.464410   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.475990   65211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.495092   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.506831   65211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:30.517410   65211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:30.517463   65211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:30.532465   65211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:58:30.543958   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:30.679788   65211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:30.839388   65211 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:30.839466   65211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:30.844666   65211 start.go:562] Will wait 60s for crictl version
	I0318 21:58:30.844720   65211 ssh_runner.go:195] Run: which crictl
	I0318 21:58:30.848886   65211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:30.888598   65211 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:30.888686   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.921097   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.954037   65211 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:58:30.955378   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:30.958352   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.958792   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:30.958822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.959064   65211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:30.963556   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:30.977788   65211 kubeadm.go:877] updating cluster {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:30.977899   65211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:58:30.977949   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:31.018843   65211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:58:31.018926   65211 ssh_runner.go:195] Run: which lz4
	I0318 21:58:31.023589   65211 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:58:31.028416   65211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:31.028445   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:58:30.668558   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting to get IP...
	I0318 21:58:30.669483   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.669936   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.670023   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.669931   66350 retry.go:31] will retry after 222.544346ms: waiting for machine to come up
	I0318 21:58:30.894570   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.895113   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.895140   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.895068   66350 retry.go:31] will retry after 355.752794ms: waiting for machine to come up
	I0318 21:58:31.252797   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.253265   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.253293   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.253217   66350 retry.go:31] will retry after 473.104426ms: waiting for machine to come up
	I0318 21:58:31.727579   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.728129   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.728157   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.728079   66350 retry.go:31] will retry after 566.412205ms: waiting for machine to come up
	I0318 21:58:32.295552   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.296044   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.296072   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.296004   66350 retry.go:31] will retry after 573.484484ms: waiting for machine to come up
	I0318 21:58:32.870871   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.871287   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.871346   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.871277   66350 retry.go:31] will retry after 932.863596ms: waiting for machine to come up
	I0318 21:58:33.805377   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:33.805847   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:33.805895   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:33.805795   66350 retry.go:31] will retry after 1.069321569s: waiting for machine to come up
	I0318 21:58:34.877311   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:34.877827   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:34.877860   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:34.877773   66350 retry.go:31] will retry after 1.27837332s: waiting for machine to come up
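The "will retry after ..." lines above come from minikube's retry helper, which keeps re-querying libvirt for the guest's DHCP lease with a growing, jittered delay until the machine reports an IP. A minimal sketch of that pattern is below; the lookupIP helper and the delay doubling are illustrative assumptions, not minikube's actual retry.go code.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a randomized, roughly doubling delay,
    // mirroring the "will retry after ...: waiting for machine to come up" lines.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	if ip, err := waitForIP(2 * time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("machine IP:", ip)
    	}
    }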
	I0318 21:58:32.944637   65211 crio.go:462] duration metric: took 1.921083293s to copy over tarball
	I0318 21:58:32.944709   65211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:35.696230   65211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751490576s)
	I0318 21:58:35.696261   65211 crio.go:469] duration metric: took 2.751600779s to extract the tarball
	I0318 21:58:35.696271   65211 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:35.739467   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:35.794398   65211 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:58:35.794427   65211 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:58:35.794436   65211 kubeadm.go:928] updating node { 192.168.39.243 8443 v1.28.4 crio true true} ...
	I0318 21:58:35.794559   65211 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-141758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:35.794625   65211 ssh_runner.go:195] Run: crio config
	I0318 21:58:35.844849   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:35.844877   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:35.844888   65211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:35.844923   65211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-141758 NodeName:embed-certs-141758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:58:35.845069   65211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-141758"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:35.845124   65211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:58:35.856885   65211 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:35.856950   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:35.867990   65211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 21:58:35.887057   65211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:35.909244   65211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
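The "scp memory --> ..." entries write in-memory buffers (the rendered kubelet drop-in, the kubelet unit, and kubeadm.yaml) to files on the guest over the already-open SSH connection. A rough sketch of that idea using golang.org/x/crypto/ssh follows; the client setup, credentials, and paths are assumptions for illustration, not minikube's ssh_runner implementation.

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"

    	"golang.org/x/crypto/ssh"
    )

    // copyMemory streams data to a remote path by piping it into `sudo tee`,
    // one simple way to implement an "scp from memory" step.
    func copyMemory(client *ssh.Client, data []byte, remotePath string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }

    func main() {
    	// Connection details are placeholders only.
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.Password("example")},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.243:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	kubeadmYAML := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n")
    	if err := copyMemory(client, kubeadmYAML, "/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
    		log.Fatal(err)
    	}
    }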
	I0318 21:58:35.931267   65211 ssh_runner.go:195] Run: grep 192.168.39.243	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:35.935793   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:35.950323   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:36.093377   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:36.112548   65211 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758 for IP: 192.168.39.243
	I0318 21:58:36.112575   65211 certs.go:194] generating shared ca certs ...
	I0318 21:58:36.112596   65211 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:36.112766   65211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:36.112813   65211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:36.112822   65211 certs.go:256] generating profile certs ...
	I0318 21:58:36.112943   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/client.key
	I0318 21:58:36.113043   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key.d575a4ae
	I0318 21:58:36.113097   65211 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key
	I0318 21:58:36.113263   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:36.113307   65211 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:36.113322   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:36.113359   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:36.113396   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:36.113429   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:36.113536   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:36.114412   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:36.147930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:36.177554   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:36.208374   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:36.243425   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 21:58:36.276720   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:58:36.317930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:36.345717   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:58:36.371655   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:36.396998   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:36.422750   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:36.448117   65211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:36.466558   65211 ssh_runner.go:195] Run: openssl version
	I0318 21:58:36.472888   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:36.484389   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489534   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489585   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.496045   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:36.507723   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:36.519030   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524214   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524267   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.531109   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:36.543912   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:36.556130   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561330   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561369   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.567883   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
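The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject-hash values of the corresponding certificates, which is how TLS libraries locate a CA under /etc/ssl/certs. A small sketch of reproducing that step, shelling out to the same openssl command the log runs; the certificate path is just an example.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // subjectHash returns what `openssl x509 -hash -noout -in <cert>` prints,
    // which OpenSSL uses as the symlink name <hash>.0 in /etc/ssl/certs.
    func subjectHash(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path
    	hash, err := subjectHash(cert)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The trust-store entry would then be: ln -fs <cert> /etc/ssl/certs/<hash>.0
    	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
    }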
	I0318 21:58:36.158196   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:36.158633   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:36.158667   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:36.158581   66350 retry.go:31] will retry after 1.348066025s: waiting for machine to come up
	I0318 21:58:37.509248   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:37.509617   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:37.509637   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:37.509581   66350 retry.go:31] will retry after 2.080074922s: waiting for machine to come up
	I0318 21:58:39.591514   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:39.591973   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:39.592001   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:39.591934   66350 retry.go:31] will retry after 2.302421788s: waiting for machine to come up
	I0318 21:58:36.579819   65211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:36.824046   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:36.831273   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:36.838571   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:36.845621   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:36.852423   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:36.859433   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
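Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether that certificate expires within the next 24 hours (86400 seconds). The same check can be done in-process with crypto/x509; a minimal sketch, where the path is only an example:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }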
	I0318 21:58:36.866091   65211 kubeadm.go:391] StartCluster: {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:36.866212   65211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:36.866263   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:36.912390   65211 cri.go:89] found id: ""
	I0318 21:58:36.912460   65211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:36.929896   65211 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:36.929923   65211 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:36.929931   65211 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:36.929985   65211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:36.947191   65211 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:36.948613   65211 kubeconfig.go:125] found "embed-certs-141758" server: "https://192.168.39.243:8443"
	I0318 21:58:36.951641   65211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:36.966095   65211 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.243
	I0318 21:58:36.966135   65211 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:36.966150   65211 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:36.966216   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:37.022620   65211 cri.go:89] found id: ""
	I0318 21:58:37.022680   65211 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:37.042338   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:37.054534   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:37.054552   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:37.054588   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:37.066099   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:37.066166   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:37.077340   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:37.088158   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:37.088214   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:37.099190   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.110081   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:37.110118   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.121852   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:37.133161   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:37.133215   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:37.144199   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:37.155593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.271593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.921199   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.175721   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.264478   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
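Because existing configuration was found, the restart path re-runs individual `kubeadm init phase` subcommands against the regenerated kubeadm.yaml instead of a full `kubeadm init`. A rough local sketch of driving that same phase sequence (the log runs it over SSH on the guest; binary and config paths are taken from the lines above):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
    	config := "/var/tmp/minikube/kubeadm.yaml"

    	// The same phase order that appears in the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", config)
    		out, err := exec.Command(kubeadm, args...).CombinedOutput()
    		if err != nil {
    			log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
    		}
    		fmt.Printf("ran kubeadm %v\n", p)
    	}
    }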
	I0318 21:58:38.377591   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:38.377683   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:38.878031   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.377859   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.417546   65211 api_server.go:72] duration metric: took 1.039957218s to wait for apiserver process to appear ...
	I0318 21:58:39.417576   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:58:39.417599   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:39.418125   65211 api_server.go:269] stopped: https://192.168.39.243:8443/healthz: Get "https://192.168.39.243:8443/healthz": dial tcp 192.168.39.243:8443: connect: connection refused
	I0318 21:58:39.917663   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.450620   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.450656   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.450668   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.489722   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.489755   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.918487   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.924551   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:42.924584   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.418077   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.424938   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:43.424969   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.918053   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.922905   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 21:58:43.931126   65211 api_server.go:141] control plane version: v1.28.4
	I0318 21:58:43.931151   65211 api_server.go:131] duration metric: took 4.513568499s to wait for apiserver health ...
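The healthz loop above keeps hitting https://192.168.39.243:8443/healthz until it returns 200, tolerating the 403 and 500 responses that appear while the RBAC bootstrap roles and priority classes are still being created. A stripped-down sketch of that polling loop; TLS verification is skipped because the probe is anonymous, and the address is the one from this run.

    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the timeout elapses.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("timed out waiting for apiserver healthz")
    }

    func main() {
    	if err := waitHealthy("https://192.168.39.243:8443/healthz", 2*time.Minute); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("apiserver healthy")
    }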
	I0318 21:58:43.931159   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:43.931173   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:43.932876   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:58:41.897573   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:41.898012   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:41.898035   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:41.897964   66350 retry.go:31] will retry after 2.645096928s: waiting for machine to come up
	I0318 21:58:44.544646   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:44.545116   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:44.545153   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:44.545053   66350 retry.go:31] will retry after 3.010240256s: waiting for machine to come up
	I0318 21:58:43.934155   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:58:43.948750   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:58:43.978849   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:58:43.991046   65211 system_pods.go:59] 8 kube-system pods found
	I0318 21:58:43.991082   65211 system_pods.go:61] "coredns-5dd5756b68-r9pft" [add358cf-d544-4107-a05f-5e60542ea456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:58:43.991089   65211 system_pods.go:61] "etcd-embed-certs-141758" [31274121-ec65-46b5-bcda-65698c28bd1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:58:43.991095   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [61e4c0db-7a20-4c93-83b3-de4738e82614] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:58:43.991100   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [c2ffe900-4e3a-4c21-ae8f-cd42475207c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:58:43.991105   65211 system_pods.go:61] "kube-proxy-klmnb" [45b0c762-4eaf-4e8a-b321-0d474f61086e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:58:43.991109   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [5aeed9aa-9d98-49c0-bf8a-3998738f6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:58:43.991114   65211 system_pods.go:61] "metrics-server-57f55c9bc5-vt7hj" [949e4c0f-6a76-4141-b30c-f27291873f14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:58:43.991123   65211 system_pods.go:61] "storage-provisioner" [0aca1af6-3221-4698-915b-cabb9da662bf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:58:43.991128   65211 system_pods.go:74] duration metric: took 12.25858ms to wait for pod list to return data ...
	I0318 21:58:43.991136   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:58:43.996109   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:58:43.996135   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 21:58:43.996146   65211 node_conditions.go:105] duration metric: took 5.004614ms to run NodePressure ...
	I0318 21:58:43.996163   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:44.227606   65211 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234823   65211 kubeadm.go:733] kubelet initialised
	I0318 21:58:44.234846   65211 kubeadm.go:734] duration metric: took 7.215375ms waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234854   65211 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:58:44.241197   65211 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.248990   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249008   65211 pod_ready.go:81] duration metric: took 7.784519ms for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.249016   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249022   65211 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.254792   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254820   65211 pod_ready.go:81] duration metric: took 5.788084ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.254833   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254846   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.261248   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261272   65211 pod_ready.go:81] duration metric: took 6.415486ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.261282   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261291   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.383016   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383056   65211 pod_ready.go:81] duration metric: took 121.750871ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.383069   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383078   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784241   65211 pod_ready.go:92] pod "kube-proxy-klmnb" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:44.784264   65211 pod_ready.go:81] duration metric: took 401.177044ms for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784272   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
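The pod_ready.go waits above skip a pod (logging "skipping!") while its node still reports Ready=False, then keep re-checking until the pod's own Ready condition flips to True. A simplified client-go sketch of polling one pod's Ready condition; the kubeconfig path is a placeholder, and the pod name is the kube-proxy pod from this run.

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-klmnb", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal(errors.New("timed out waiting for pod to become Ready"))
    }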
	I0318 21:58:48.950018   65699 start.go:364] duration metric: took 4m12.246849763s to acquireMachinesLock for "no-preload-963041"
	I0318 21:58:48.950078   65699 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:48.950087   65699 fix.go:54] fixHost starting: 
	I0318 21:58:48.950522   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:48.950556   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:48.966094   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0318 21:58:48.966492   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:48.966970   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:58:48.966994   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:48.967295   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:48.967443   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:58:48.967548   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:58:48.968800   65699 fix.go:112] recreateIfNeeded on no-preload-963041: state=Stopped err=<nil>
	I0318 21:58:48.968835   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	W0318 21:58:48.969105   65699 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:48.970900   65699 out.go:177] * Restarting existing kvm2 VM for "no-preload-963041" ...
	I0318 21:58:47.559274   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559793   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has current primary IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559814   65622 main.go:141] libmachine: (old-k8s-version-648232) Found IP for machine: 192.168.61.111
	I0318 21:58:47.559828   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserving static IP address...
	I0318 21:58:47.560325   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.560359   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserved static IP address: 192.168.61.111
	I0318 21:58:47.560385   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | skip adding static IP to network mk-old-k8s-version-648232 - found existing host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"}
	I0318 21:58:47.560401   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Getting to WaitForSSH function...
	I0318 21:58:47.560417   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting for SSH to be available...
	I0318 21:58:47.562852   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563285   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.563314   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563494   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH client type: external
	I0318 21:58:47.563522   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa (-rw-------)
	I0318 21:58:47.563561   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:47.563576   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | About to run SSH command:
	I0318 21:58:47.563622   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | exit 0
	I0318 21:58:47.692948   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:47.693373   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:58:47.694034   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:47.696795   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697184   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.697213   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697437   65622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:58:47.697637   65622 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:47.697658   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:47.697846   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.700225   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700525   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.700549   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700649   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.700816   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.700993   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.701112   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.701276   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.701440   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.701450   65622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:47.809658   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:47.809690   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.809920   65622 buildroot.go:166] provisioning hostname "old-k8s-version-648232"
	I0318 21:58:47.809945   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.810132   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.812510   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.812869   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.812896   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.813079   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.813266   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813414   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813559   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.813726   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.813935   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.813954   65622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-648232 && echo "old-k8s-version-648232" | sudo tee /etc/hostname
	I0318 21:58:47.949030   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-648232
	
	I0318 21:58:47.949063   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.952028   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952387   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.952424   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952586   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.952768   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.952972   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.953109   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.953280   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.953488   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.953514   65622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-648232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-648232/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-648232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:48.072416   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:48.072457   65622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:48.072484   65622 buildroot.go:174] setting up certificates
	I0318 21:58:48.072494   65622 provision.go:84] configureAuth start
	I0318 21:58:48.072506   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:48.072802   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.075880   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076202   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.076235   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076407   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.078791   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079125   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.079155   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079292   65622 provision.go:143] copyHostCerts
	I0318 21:58:48.079370   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:48.079385   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:48.079441   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:48.079552   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:48.079565   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:48.079595   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:48.079675   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:48.079686   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:48.079719   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:48.079797   65622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-648232 san=[127.0.0.1 192.168.61.111 localhost minikube old-k8s-version-648232]
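	The regenerated server certificate is issued for the SANs listed above (127.0.0.1, 192.168.61.111, localhost, minikube, old-k8s-version-648232). If those SANs ever need to be confirmed on the generated file, a quick check looks like this (illustrative only, not a command the test runs):

	    # Print the Subject Alternative Name extension of the generated server cert.
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem \
	      | grep -A1 "Subject Alternative Name"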
	I0318 21:58:48.236852   65622 provision.go:177] copyRemoteCerts
	I0318 21:58:48.236923   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:48.236952   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.239485   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.239807   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.239839   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.240022   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.240187   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.240338   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.240470   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.338739   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:48.367538   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:58:48.397586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:48.425384   65622 provision.go:87] duration metric: took 352.877274ms to configureAuth
	I0318 21:58:48.425415   65622 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:48.425624   65622 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:58:48.425693   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.427989   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428345   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.428365   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428593   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.428793   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.428968   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.429114   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.429269   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.429434   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.429455   65622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:48.706098   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:48.706131   65622 machine.go:97] duration metric: took 1.008474629s to provisionDockerMachine
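	The printf argument in the command above was swallowed by the logger: the "%!s(MISSING)" runs here, in the later "date +%!s(MISSING).%!N(MISSING)" probe, in find's "-printf "%!p(MISSING), "", and in the "0%!"(MISSING)" eviction thresholds of the kubeadm config are Go fmt placeholders that appear when the literal command text is passed through a Printf-style logger without arguments; the commands actually sent use plain %s, %N, %p and "0%". The command executed here is, in all likelihood, the following (a reconstruction, not part of the captured output):

	    # Reconstruction (assumption: the logger ate the literal %s verb).
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio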
	I0318 21:58:48.706148   65622 start.go:293] postStartSetup for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:58:48.706165   65622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:48.706193   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.706546   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:48.706580   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.709104   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709434   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.709464   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709589   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.709787   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.709969   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.710109   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.792915   65622 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:48.797845   65622 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:48.797864   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:48.797932   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:48.798038   65622 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:48.798150   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:48.808487   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:48.838863   65622 start.go:296] duration metric: took 132.703395ms for postStartSetup
	I0318 21:58:48.838896   65622 fix.go:56] duration metric: took 19.424816589s for fixHost
	I0318 21:58:48.838927   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.841223   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841572   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.841603   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841683   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.841876   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842015   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842138   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.842295   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.842469   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.842483   65622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:48.949868   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799128.925696756
	
	I0318 21:58:48.949893   65622 fix.go:216] guest clock: 1710799128.925696756
	I0318 21:58:48.949901   65622 fix.go:229] Guest: 2024-03-18 21:58:48.925696756 +0000 UTC Remote: 2024-03-18 21:58:48.838901995 +0000 UTC m=+258.909510680 (delta=86.794761ms)
	I0318 21:58:48.949925   65622 fix.go:200] guest clock delta is within tolerance: 86.794761ms
	I0318 21:58:48.949932   65622 start.go:83] releasing machines lock for "old-k8s-version-648232", held for 19.535879787s
	I0318 21:58:48.949963   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.950245   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.952656   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953000   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.953030   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953184   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953664   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953845   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953931   65622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:48.953973   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.954053   65622 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:48.954070   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.956479   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956764   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956801   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.956828   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956944   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957100   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957250   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957281   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.957302   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.957432   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.957451   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957582   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957721   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957858   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:49.066050   65622 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:49.072126   65622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:49.220860   65622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:49.227821   65622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:49.227882   65622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:49.245262   65622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:49.245285   65622 start.go:494] detecting cgroup driver to use...
	I0318 21:58:49.245359   65622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:49.261736   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:49.278239   65622 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:49.278289   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:49.297240   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:49.312813   65622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:49.435983   65622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:49.584356   65622 docker.go:233] disabling docker service ...
	I0318 21:58:49.584432   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:49.603469   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:49.619602   65622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:49.775541   65622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:49.919861   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:49.940785   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:49.964296   65622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:58:49.964356   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.976612   65622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:49.977221   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.988978   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.000697   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.012348   65622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:50.023873   65622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:50.033574   65622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:50.033611   65622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:50.047262   65622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
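	The failed sysctl above is the expected branch when br_netfilter is not yet loaded: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the module is in, so the provisioner falls back to modprobe and then enables IPv4 forwarding. The same check-then-load pattern as a sketch:

	    # sysctl fails while br_netfilter is unloaded; load it, then turn on forwarding.
	    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"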
	I0318 21:58:50.058328   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:50.205960   65622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:50.356293   65622 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:50.356376   65622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:50.361732   65622 start.go:562] Will wait 60s for crictl version
	I0318 21:58:50.361796   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:50.366347   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:50.406298   65622 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:50.406398   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.440705   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.473017   65622 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:58:46.795337   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:49.295100   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.299437   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:48.972407   65699 main.go:141] libmachine: (no-preload-963041) Calling .Start
	I0318 21:58:48.972572   65699 main.go:141] libmachine: (no-preload-963041) Ensuring networks are active...
	I0318 21:58:48.973251   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network default is active
	I0318 21:58:48.973606   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network mk-no-preload-963041 is active
	I0318 21:58:48.973992   65699 main.go:141] libmachine: (no-preload-963041) Getting domain xml...
	I0318 21:58:48.974629   65699 main.go:141] libmachine: (no-preload-963041) Creating domain...
	I0318 21:58:50.190010   65699 main.go:141] libmachine: (no-preload-963041) Waiting to get IP...
	I0318 21:58:50.190750   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.191241   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.191320   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.191220   66466 retry.go:31] will retry after 238.162453ms: waiting for machine to come up
	I0318 21:58:50.430778   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.431262   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.431292   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.431191   66466 retry.go:31] will retry after 318.744541ms: waiting for machine to come up
	I0318 21:58:50.751612   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.752051   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.752086   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.752007   66466 retry.go:31] will retry after 464.29047ms: waiting for machine to come up
	I0318 21:58:51.218462   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.219034   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.219062   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.218983   66466 retry.go:31] will retry after 476.466311ms: waiting for machine to come up
	I0318 21:58:50.474496   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:50.477908   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478353   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:50.478389   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478618   65622 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:50.483617   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
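	The /etc/hosts rewrite above uses the temp-file-then-sudo-cp idiom because a plain redirection ("sudo ... > /etc/hosts") would be performed by the unprivileged calling shell, not by sudo. Spelled out, the command the log shows does roughly this:

	    # Drop any stale host.minikube.internal line, append the fresh mapping,
	    # then copy the result into place with root privileges.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo "192.168.61.1	host.minikube.internal"
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts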
	I0318 21:58:50.499147   65622 kubeadm.go:877] updating cluster {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:50.499269   65622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:58:50.499333   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:50.551649   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:50.551716   65622 ssh_runner.go:195] Run: which lz4
	I0318 21:58:50.556525   65622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:58:50.561566   65622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:50.561594   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:58:52.646283   65622 crio.go:462] duration metric: took 2.089798336s to copy over tarball
	I0318 21:58:52.646359   65622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:53.792483   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.696634   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.697179   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.697208   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.697099   66466 retry.go:31] will retry after 520.896381ms: waiting for machine to come up
	I0318 21:58:52.219861   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:52.220480   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:52.220506   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:52.220414   66466 retry.go:31] will retry after 872.240898ms: waiting for machine to come up
	I0318 21:58:53.094123   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.094547   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.094580   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.094499   66466 retry.go:31] will retry after 757.325359ms: waiting for machine to come up
	I0318 21:58:53.852954   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.853422   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.853453   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.853358   66466 retry.go:31] will retry after 1.459327383s: waiting for machine to come up
	I0318 21:58:55.313969   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:55.314382   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:55.314413   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:55.314328   66466 retry.go:31] will retry after 1.373606235s: waiting for machine to come up
	I0318 21:58:55.995228   65622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.348837805s)
	I0318 21:58:55.995262   65622 crio.go:469] duration metric: took 3.348951107s to extract the tarball
	I0318 21:58:55.995271   65622 ssh_runner.go:146] rm: /preloaded.tar.lz4
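	The preload is shipped as an lz4-compressed tarball (the 473237281-byte scp above) and unpacked into /var with extended attributes preserved: "-I lz4" delegates decompression to lz4, and "--xattrs --xattrs-include security.capability" keeps file capabilities on the extracted binaries. The extract-and-clean-up step amounts to:

	    # Unpack the preloaded images/binaries into /var, keeping security.capability
	    # xattrs, then remove the tarball to free disk space.
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm -f /preloaded.tar.lz4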
	I0318 21:58:56.043148   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:56.091295   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:56.091320   65622 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:58:56.091409   65622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.091418   65622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.091431   65622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.091421   65622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.091448   65622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.091471   65622 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.091506   65622 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.091512   65622 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:58:56.092923   65622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.093028   65622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.093048   65622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.093052   65622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.092924   65622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.093136   65622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.093143   65622 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:58:56.093250   65622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.239200   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.242232   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.244160   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.248823   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.255548   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.264753   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.306940   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:58:56.359783   65622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:58:56.359825   65622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.359874   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413012   65622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:58:56.413051   65622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.413101   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413420   65622 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:58:56.413455   65622 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.413490   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.442743   65622 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:58:56.442787   65622 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.442832   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.450680   65622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:58:56.450733   65622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.450798   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.462926   65622 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:58:56.462963   65622 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:58:56.462989   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.462992   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.463034   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.463090   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.463138   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.463145   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.463159   65622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:58:56.463183   65622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.463221   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.592127   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.592159   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:58:56.593931   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:58:56.593968   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:58:56.593973   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:58:56.594059   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:58:56.594143   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:58:56.660138   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:58:56.660360   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:58:56.983635   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:57.142451   65622 cache_images.go:92] duration metric: took 1.051113719s to LoadCachedImages
	W0318 21:58:57.142554   65622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0318 21:58:57.142575   65622 kubeadm.go:928] updating node { 192.168.61.111 8443 v1.20.0 crio true true} ...
	I0318 21:58:57.142723   65622 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-648232 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:57.142797   65622 ssh_runner.go:195] Run: crio config
	I0318 21:58:57.195416   65622 cni.go:84] Creating CNI manager for ""
	I0318 21:58:57.195439   65622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:57.195451   65622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:57.195468   65622 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-648232 NodeName:old-k8s-version-648232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:58:57.195585   65622 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-648232"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:57.195650   65622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:58:57.208700   65622 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:57.208757   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:57.220276   65622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 21:58:57.239513   65622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:57.258540   65622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 21:58:57.277932   65622 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:57.282433   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:57.298049   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:57.427745   65622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:57.459845   65622 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232 for IP: 192.168.61.111
	I0318 21:58:57.459867   65622 certs.go:194] generating shared ca certs ...
	I0318 21:58:57.459904   65622 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:57.460072   65622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:57.460123   65622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:57.460138   65622 certs.go:256] generating profile certs ...
	I0318 21:58:57.460254   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key
	I0318 21:58:57.460328   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4
	I0318 21:58:57.460376   65622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key
	I0318 21:58:57.460521   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:57.460560   65622 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:57.460573   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:57.460602   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:57.460637   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:57.460668   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:57.460733   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:57.461586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:57.515591   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:57.541750   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:57.575282   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:57.617495   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:58:57.657111   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:58:57.705104   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:57.737956   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:58:57.766218   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:57.793952   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:57.824458   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:57.852188   65622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:57.872773   65622 ssh_runner.go:195] Run: openssl version
	I0318 21:58:57.880817   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:57.896644   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902576   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902636   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.908893   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:57.922730   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:57.936508   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941802   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941839   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.948093   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:57.961852   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:57.974049   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978886   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978929   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.984848   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
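	The three hash-and-link sequences above are how OpenSSL-based clients locate a CA at verification time: each CA file under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. A minimal sketch of the same pattern, reusing the minikubeCA.pem path from this run (any CA file works the same way):

	# Compute the subject hash OpenSSL looks up, then create the <hash>.0 symlink.
	hash="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# Confirm the link resolves to a parseable certificate.
	openssl x509 -subject -noout -in "/etc/ssl/certs/${hash}.0"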
	I0318 21:58:57.997033   65622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:58.002171   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:58.008665   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:58.014908   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:58.021663   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:58.029605   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:58.038208   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
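	Each of the openssl calls above uses -checkend 86400, which asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires inside that window and would have to be regenerated before restarting the control plane. A small sketch of the same check for one of the certs listed in this run:

	# Exit 0: still valid in 24h; exit 1: expires within 24h.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		echo "apiserver-kubelet-client.crt is valid for at least another 24h"
	else
		echo "apiserver-kubelet-client.crt expires within 24h"
	fi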
	I0318 21:58:58.044738   65622 kubeadm.go:391] StartCluster: {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:58.044828   65622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:58.044881   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.095866   65622 cri.go:89] found id: ""
	I0318 21:58:58.096010   65622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:58.108723   65622 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:58.108745   65622 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:58.108751   65622 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:58.108797   65622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:58.120754   65622 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:58.121803   65622 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:58:58.122532   65622 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-648232" cluster setting kubeconfig missing "old-k8s-version-648232" context setting]
	I0318 21:58:58.123561   65622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:58.125229   65622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:58.136331   65622 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.111
	I0318 21:58:58.136360   65622 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:58.136372   65622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:58.136416   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.179370   65622 cri.go:89] found id: ""
	I0318 21:58:58.179465   65622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:58.197860   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:58.208772   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:58.208796   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:58.208837   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:58.219033   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:58.219090   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:58.230223   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:58.240823   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:58.240886   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:58.251629   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.262525   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:58.262573   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.274831   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:58.286644   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:58.286690   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:58.298127   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:58.309664   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:58.456818   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.106974   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.334718   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.434113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.534368   65622 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:59.534461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:57.057776   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:57.791727   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:57.791754   65211 pod_ready.go:81] duration metric: took 13.007474768s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:57.791769   65211 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:59.800074   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:56.689643   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:56.690039   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:56.690064   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:56.690020   66466 retry.go:31] will retry after 1.905319343s: waiting for machine to come up
	I0318 21:58:58.597961   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:58.598470   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:58.598501   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:58.598420   66466 retry.go:31] will retry after 2.720364267s: waiting for machine to come up
	I0318 21:59:01.321901   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:01.322290   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:01.322312   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:01.322254   66466 retry.go:31] will retry after 2.73029124s: waiting for machine to come up
	I0318 21:59:00.035251   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:00.534822   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.034721   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.535447   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.034809   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.535193   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.034597   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.534670   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.035493   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.535148   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
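	The repeated pgrep calls above are the apiserver wait loop: after the kubeadm init phases, the runner polls roughly every 500ms for a kube-apiserver process to appear before moving on. A hedged shell sketch of an equivalent poll (the 60s budget here is illustrative, not taken from the log):

	# Poll for kube-apiserver for up to 60s, sleeping 500ms between attempts.
	deadline=$((SECONDS + 60))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
		[ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
		sleep 0.5
	done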
	I0318 21:59:02.299143   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.800475   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.054294   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:04.054715   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:04.054752   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:04.054671   66466 retry.go:31] will retry after 3.148777081s: waiting for machine to come up
	I0318 21:59:08.706453   65170 start.go:364] duration metric: took 55.86344587s to acquireMachinesLock for "default-k8s-diff-port-660775"
	I0318 21:59:08.706504   65170 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:59:08.706515   65170 fix.go:54] fixHost starting: 
	I0318 21:59:08.706934   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:08.706970   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:08.723564   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0318 21:59:08.723935   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:08.724359   65170 main.go:141] libmachine: Using API Version  1
	I0318 21:59:08.724381   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:08.724671   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:08.724874   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:08.725045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 21:59:08.726635   65170 fix.go:112] recreateIfNeeded on default-k8s-diff-port-660775: state=Stopped err=<nil>
	I0318 21:59:08.726656   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	W0318 21:59:08.726813   65170 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:59:08.728839   65170 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-660775" ...
	I0318 21:59:05.035054   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:05.535108   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.035211   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.535398   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.035017   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.534769   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.035221   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.534593   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.035328   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.534533   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.730181   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Start
	I0318 21:59:08.730374   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring networks are active...
	I0318 21:59:08.731140   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network default is active
	I0318 21:59:08.731488   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network mk-default-k8s-diff-port-660775 is active
	I0318 21:59:08.731850   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Getting domain xml...
	I0318 21:59:08.732544   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Creating domain...
	I0318 21:59:10.014924   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting to get IP...
	I0318 21:59:10.015822   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016215   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016299   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.016206   66608 retry.go:31] will retry after 301.369371ms: waiting for machine to come up
	I0318 21:59:07.205807   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206239   65699 main.go:141] libmachine: (no-preload-963041) Found IP for machine: 192.168.72.84
	I0318 21:59:07.206266   65699 main.go:141] libmachine: (no-preload-963041) Reserving static IP address...
	I0318 21:59:07.206281   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has current primary IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206636   65699 main.go:141] libmachine: (no-preload-963041) Reserved static IP address: 192.168.72.84
	I0318 21:59:07.206659   65699 main.go:141] libmachine: (no-preload-963041) Waiting for SSH to be available...
	I0318 21:59:07.206686   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.206711   65699 main.go:141] libmachine: (no-preload-963041) DBG | skip adding static IP to network mk-no-preload-963041 - found existing host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"}
	I0318 21:59:07.206728   65699 main.go:141] libmachine: (no-preload-963041) DBG | Getting to WaitForSSH function...
	I0318 21:59:07.208790   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209157   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.209202   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209306   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH client type: external
	I0318 21:59:07.209331   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa (-rw-------)
	I0318 21:59:07.209367   65699 main.go:141] libmachine: (no-preload-963041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:07.209381   65699 main.go:141] libmachine: (no-preload-963041) DBG | About to run SSH command:
	I0318 21:59:07.209395   65699 main.go:141] libmachine: (no-preload-963041) DBG | exit 0
	I0318 21:59:07.337357   65699 main.go:141] libmachine: (no-preload-963041) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:07.337688   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetConfigRaw
	I0318 21:59:07.338258   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.340609   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.340957   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.340996   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.341213   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:59:07.341396   65699 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:07.341462   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:07.341668   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.343956   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344275   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.344311   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344395   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.344580   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344756   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344891   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.345086   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.345264   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.345276   65699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:07.457491   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:07.457543   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457778   65699 buildroot.go:166] provisioning hostname "no-preload-963041"
	I0318 21:59:07.457802   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457975   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.460729   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461120   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.461145   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461286   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.461480   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461643   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461797   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.461980   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.462179   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.462193   65699 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-963041 && echo "no-preload-963041" | sudo tee /etc/hostname
	I0318 21:59:07.592194   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-963041
	
	I0318 21:59:07.592219   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.594794   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595141   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.595177   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595305   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.595484   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595673   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595836   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.595987   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.596144   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.596160   65699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-963041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-963041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-963041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:07.719593   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:07.719622   65699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:07.719655   65699 buildroot.go:174] setting up certificates
	I0318 21:59:07.719667   65699 provision.go:84] configureAuth start
	I0318 21:59:07.719681   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.719928   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.722544   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.722907   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.722935   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.723095   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.725108   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725391   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.725420   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725522   65699 provision.go:143] copyHostCerts
	I0318 21:59:07.725582   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:07.725595   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:07.725665   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:07.725780   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:07.725792   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:07.725817   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:07.725874   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:07.725881   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:07.725898   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:07.725945   65699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.no-preload-963041 san=[127.0.0.1 192.168.72.84 localhost minikube no-preload-963041]
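	The server certificate generated here carries the SANs listed in the log entry above (127.0.0.1, 192.168.72.84, localhost, minikube, no-preload-963041), signed by the host CA. A quick way to inspect which names a generated server.pem actually covers, using the path from this run:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem \
		| grep -A1 'Subject Alternative Name'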
	I0318 21:59:07.893632   65699 provision.go:177] copyRemoteCerts
	I0318 21:59:07.893685   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:07.893711   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.896227   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896501   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.896527   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896692   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.896859   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.897035   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.897205   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:07.983501   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:08.014432   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:59:08.043755   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:59:08.074388   65699 provision.go:87] duration metric: took 354.707214ms to configureAuth
	I0318 21:59:08.074413   65699 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:08.074571   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:08.074638   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.077314   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077658   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.077690   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077837   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.077996   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078150   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078289   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.078435   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.078582   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.078596   65699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:08.446711   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:08.446745   65699 machine.go:97] duration metric: took 1.105332987s to provisionDockerMachine
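	The tee output captured above shows the CRIO_MINIKUBE_OPTIONS line that ends up in /etc/sysconfig/crio.minikube; checking the file directly on the guest should show the same content:

	cat /etc/sysconfig/crio.minikube
	# Expected, matching the tee output above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '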
	I0318 21:59:08.446757   65699 start.go:293] postStartSetup for "no-preload-963041" (driver="kvm2")
	I0318 21:59:08.446772   65699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:08.446787   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.447090   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:08.447118   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.449551   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.449917   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.449955   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.450117   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.450308   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.450471   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.450611   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.542283   65699 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:08.547389   65699 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:08.547423   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:08.547501   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:08.547606   65699 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:08.547732   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:08.558721   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:08.586136   65699 start.go:296] duration metric: took 139.367706ms for postStartSetup
	I0318 21:59:08.586177   65699 fix.go:56] duration metric: took 19.636089577s for fixHost
	I0318 21:59:08.586201   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.588809   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589192   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.589219   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589435   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.589604   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589731   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589838   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.589972   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.590182   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.590197   65699 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:08.706260   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799148.650279332
	
	I0318 21:59:08.706283   65699 fix.go:216] guest clock: 1710799148.650279332
	I0318 21:59:08.706293   65699 fix.go:229] Guest: 2024-03-18 21:59:08.650279332 +0000 UTC Remote: 2024-03-18 21:59:08.586181408 +0000 UTC m=+272.029432082 (delta=64.097924ms)
	I0318 21:59:08.706337   65699 fix.go:200] guest clock delta is within tolerance: 64.097924ms
	I0318 21:59:08.706350   65699 start.go:83] releasing machines lock for "no-preload-963041", held for 19.756290817s
	I0318 21:59:08.706384   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.706707   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:08.709113   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709389   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.709417   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709561   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710009   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710155   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710229   65699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:08.710278   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.710330   65699 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:08.710349   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.713131   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713154   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713464   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713492   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713521   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713536   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713632   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713739   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713824   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713987   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713988   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714117   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.714177   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714337   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.827151   65699 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:08.833847   65699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:08.985638   65699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:08.992294   65699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:08.992372   65699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:09.009419   65699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:09.009444   65699 start.go:494] detecting cgroup driver to use...
	I0318 21:59:09.009509   65699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:09.031942   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:09.051842   65699 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:09.051901   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:09.068136   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:09.084445   65699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:09.234323   65699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:09.402144   65699 docker.go:233] disabling docker service ...
	I0318 21:59:09.402210   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:09.419960   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:09.434836   65699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:09.572242   65699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:09.718817   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:09.734607   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:09.756470   65699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:09.756533   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.768595   65699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:09.768685   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.780726   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.800700   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.817396   65699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:09.829896   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.842211   65699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.867273   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
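	Taken together, the sed edits above point /etc/crio/crio.conf.d/02-crio.conf at the registry.k8s.io/pause:3.9 pause image, switch CRI-O to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and add the unprivileged-port sysctl under default_sysctls. A quick check of the resulting file (expected values are inferred from the commands above, not read from the host):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		/etc/crio/crio.conf.d/02-crio.conf
	# Expected, given the edits above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",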
	I0318 21:59:09.880909   65699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:09.893254   65699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:09.893297   65699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:09.910897   65699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
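	The sysctl failure above is expected until br_netfilter is loaded; once the modprobe succeeds and IPv4 forwarding is switched on, bridged pod traffic becomes visible to iptables. A short hedged check of both settings after these steps:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of "cannot stat"; set to 1 if needed
	sysctl net.ipv4.ip_forward                  # expect: net.ipv4.ip_forward = 1 after the echo above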
	I0318 21:59:09.922400   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:10.065248   65699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:10.223498   65699 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:10.223577   65699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:10.230686   65699 start.go:562] Will wait 60s for crictl version
	I0318 21:59:10.230752   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.235527   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:10.278655   65699 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:10.278756   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.310992   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.344925   65699 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 21:59:07.298973   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:09.799803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:10.346255   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:10.349081   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349418   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:10.349437   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349657   65699 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:10.354793   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
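	The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the host-side address of the VM network (192.168.72.1 in this run), dropping any stale entry first. Verifying the mapping afterwards:

	grep 'host.minikube.internal' /etc/hosts
	# Expected, given the command above:
	# 192.168.72.1   host.minikube.internal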
	I0318 21:59:10.369744   65699 kubeadm.go:877] updating cluster {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:10.369893   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:59:10.369951   65699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:10.409975   65699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 21:59:10.410001   65699 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:59:10.410062   65699 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.410074   65699 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.410086   65699 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.410122   65699 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.410148   65699 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.410166   65699 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 21:59:10.410213   65699 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.410223   65699 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411690   65699 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.411695   65699 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 21:59:10.411730   65699 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.411747   65699 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.411764   65699 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.411793   65699 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.553195   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 21:59:10.553249   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.555774   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.559123   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.562266   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.571390   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.592690   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.702213   65699 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 21:59:10.702265   65699 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.702314   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857028   65699 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 21:59:10.857072   65699 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.857087   65699 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 21:59:10.857117   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857146   65699 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.857154   65699 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 21:59:10.857180   65699 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.857197   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857214   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857211   65699 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 21:59:10.857250   65699 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.857254   65699 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 21:59:10.857264   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.857275   65699 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.857282   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857305   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.872164   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.872195   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.872268   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.927043   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.927147   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.927095   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.927219   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.972625   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 21:59:10.972740   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:11.016239   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 21:59:11.016291   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.016356   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:11.016380   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.047703   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 21:59:11.047732   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047784   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047849   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.047952   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.069007   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 21:59:11.069064   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 21:59:11.069095   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 21:59:11.069126   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 21:59:11.069139   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:10.035384   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.534785   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.034607   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.535142   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.035259   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.535494   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.034673   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.535452   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.034630   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.535058   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.319858   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320279   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320310   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.320224   66608 retry.go:31] will retry after 253.332307ms: waiting for machine to come up
	I0318 21:59:10.575748   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576242   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576271   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.576194   66608 retry.go:31] will retry after 484.439329ms: waiting for machine to come up
	I0318 21:59:11.061837   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062291   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.062247   66608 retry.go:31] will retry after 520.757249ms: waiting for machine to come up
	I0318 21:59:11.585112   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585541   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585571   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.585485   66608 retry.go:31] will retry after 482.335377ms: waiting for machine to come up
	I0318 21:59:12.068813   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069420   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069456   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:12.069374   66608 retry.go:31] will retry after 936.563875ms: waiting for machine to come up
	I0318 21:59:13.007582   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.007986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.008012   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.007945   66608 retry.go:31] will retry after 864.468016ms: waiting for machine to come up
	I0318 21:59:13.874400   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874910   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874942   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.874875   66608 retry.go:31] will retry after 1.239808671s: waiting for machine to come up
	I0318 21:59:15.116440   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116834   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116855   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:15.116784   66608 retry.go:31] will retry after 1.208141339s: waiting for machine to come up
	I0318 21:59:11.804059   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:14.301199   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:16.301517   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:11.928081   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.330891   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.28291236s)
	I0318 21:59:14.330933   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330948   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.261785854s)
	I0318 21:59:14.330971   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330974   65699 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.402863992s)
	I0318 21:59:14.330979   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.283167958s)
	I0318 21:59:14.330996   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 21:59:14.331011   65699 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 21:59:14.331019   65699 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331043   65699 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.331064   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331086   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:14.336430   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:15.034609   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:15.534895   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.034956   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.535474   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.534736   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.035297   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.534669   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.035540   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.534617   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327415   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:16.327350   66608 retry.go:31] will retry after 2.24875206s: waiting for machine to come up
	I0318 21:59:18.578068   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578644   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:18.578589   66608 retry.go:31] will retry after 2.267791851s: waiting for machine to come up
	I0318 21:59:18.800406   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:20.800524   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:18.591731   65699 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.255273393s)
	I0318 21:59:18.591789   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 21:59:18.591897   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:18.591937   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.260848845s)
	I0318 21:59:18.591958   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 21:59:18.591986   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:18.592046   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:19.859577   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.267508443s)
	I0318 21:59:19.859608   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 21:59:19.859637   65699 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:19.859641   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.267714811s)
	I0318 21:59:19.859674   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 21:59:19.859685   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:20.035133   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.534922   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.035083   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.534538   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.035505   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.535008   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.035123   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.535181   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.034939   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.534985   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.847586   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848099   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848135   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:20.848048   66608 retry.go:31] will retry after 2.918466892s: waiting for machine to come up
	I0318 21:59:23.768491   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.768999   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.769030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:23.768962   66608 retry.go:31] will retry after 4.373256501s: waiting for machine to come up
	I0318 21:59:22.800765   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:24.801392   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:21.944666   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.084944906s)
	I0318 21:59:21.944700   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 21:59:21.944720   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:21.944766   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:24.714752   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.769964684s)
	I0318 21:59:24.714793   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 21:59:24.714827   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:24.714884   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:25.035324   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:25.534635   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.034965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.035448   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.534690   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.034991   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.034585   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.535220   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.146019   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146507   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Found IP for machine: 192.168.50.150
	I0318 21:59:28.146533   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserving static IP address...
	I0318 21:59:28.146549   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has current primary IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146939   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.146966   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserved static IP address: 192.168.50.150
	I0318 21:59:28.146986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | skip adding static IP to network mk-default-k8s-diff-port-660775 - found existing host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"}
	I0318 21:59:28.147006   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Getting to WaitForSSH function...
	I0318 21:59:28.147030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for SSH to be available...
	I0318 21:59:28.149408   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149771   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.149799   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149929   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH client type: external
	I0318 21:59:28.149978   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa (-rw-------)
	I0318 21:59:28.150020   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:28.150039   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | About to run SSH command:
	I0318 21:59:28.150050   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | exit 0
	I0318 21:59:28.273437   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:28.273768   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetConfigRaw
	I0318 21:59:28.274402   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.277330   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.277757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277997   65170 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/config.json ...
	I0318 21:59:28.278217   65170 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:28.278240   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:28.278435   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.280754   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281149   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.281178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281318   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.281495   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281646   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281796   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.281955   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.282163   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.282185   65170 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:28.390614   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:28.390642   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.390896   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:59:28.390923   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.391095   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.394421   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.394838   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.394876   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.395178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.395410   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395775   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.395953   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.396145   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.396160   65170 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-660775 && echo "default-k8s-diff-port-660775" | sudo tee /etc/hostname
	I0318 21:59:28.522303   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-660775
	
	I0318 21:59:28.522347   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.525224   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.525667   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525789   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.525961   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526122   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.526471   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.526651   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.526676   65170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-660775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-660775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-660775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:28.641488   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:28.641521   65170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:28.641547   65170 buildroot.go:174] setting up certificates
	I0318 21:59:28.641555   65170 provision.go:84] configureAuth start
	I0318 21:59:28.641564   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.641871   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.644934   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.645301   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645425   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.647753   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648089   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.648119   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648360   65170 provision.go:143] copyHostCerts
	I0318 21:59:28.648423   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:28.648435   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:28.648507   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:28.648620   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:28.648631   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:28.648660   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:28.648731   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:28.648740   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:28.648769   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:28.648829   65170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-660775 san=[127.0.0.1 192.168.50.150 default-k8s-diff-port-660775 localhost minikube]
	I0318 21:59:28.697191   65170 provision.go:177] copyRemoteCerts
	I0318 21:59:28.697253   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:28.697274   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.699919   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700237   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.700269   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700477   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.700694   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.700882   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.701060   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:28.793840   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:28.829285   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 21:59:28.857628   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:59:28.886344   65170 provision.go:87] duration metric: took 244.778215ms to configureAuth
	I0318 21:59:28.886366   65170 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:28.886527   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:59:28.886593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.889885   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890321   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.890351   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890534   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.890721   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.890879   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.891013   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.891190   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.891366   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.891399   65170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:29.189002   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:29.189033   65170 machine.go:97] duration metric: took 910.801375ms to provisionDockerMachine
	I0318 21:59:29.189046   65170 start.go:293] postStartSetup for "default-k8s-diff-port-660775" (driver="kvm2")
	I0318 21:59:29.189058   65170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:29.189083   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.189409   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:29.189438   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.192164   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.192512   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.192866   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.193045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.193190   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.277850   65170 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:29.282886   65170 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:29.282909   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:29.282975   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:29.283065   65170 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:29.283172   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:29.296052   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:29.323906   65170 start.go:296] duration metric: took 134.847993ms for postStartSetup
	I0318 21:59:29.323945   65170 fix.go:56] duration metric: took 20.61742941s for fixHost
	I0318 21:59:29.323969   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.326616   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.326920   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.327063   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.327300   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327472   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327622   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.327853   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:29.328058   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:29.328070   65170 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:29.430348   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799169.377980776
	
	I0318 21:59:29.430377   65170 fix.go:216] guest clock: 1710799169.377980776
	I0318 21:59:29.430386   65170 fix.go:229] Guest: 2024-03-18 21:59:29.377980776 +0000 UTC Remote: 2024-03-18 21:59:29.323950953 +0000 UTC m=+359.071824665 (delta=54.029823ms)
	I0318 21:59:29.430411   65170 fix.go:200] guest clock delta is within tolerance: 54.029823ms
	I0318 21:59:29.430420   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 20.723939352s
	I0318 21:59:29.430450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.430727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:29.433339   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433686   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.433713   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433865   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434308   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434531   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434632   65170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:29.434682   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.434783   65170 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:29.434811   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.437380   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437479   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437731   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437760   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437829   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437880   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.438033   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438170   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438244   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438332   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438393   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438603   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.438694   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.540670   65170 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:29.547318   65170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:29.704221   65170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:29.710762   65170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:29.710832   65170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:29.727820   65170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:29.727838   65170 start.go:494] detecting cgroup driver to use...
	I0318 21:59:29.727905   65170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:29.745750   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:29.760984   65170 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:29.761024   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:29.776639   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:29.791749   65170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:29.914380   65170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:30.096200   65170 docker.go:233] disabling docker service ...
	I0318 21:59:30.096281   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:30.112512   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:30.126090   65170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:30.258617   65170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:30.397700   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:30.420478   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:30.443197   65170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:30.443282   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.455577   65170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:30.455630   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.467898   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.480041   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.492501   65170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:30.505178   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.517657   65170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.537376   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
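
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. A rough Go sketch that assembles the same kind of shell commands (paths and values taken from the log; this is not the generator minikube itself uses):

package main

import "fmt"

// crioConfigCommands builds shell commands mirroring the 02-crio.conf edits
// seen in the log above: pause image, cgroup manager, and conmon_cgroup.
func crioConfigCommands(pauseImage, cgroupManager, conf string) []string {
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	cmds := crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs", "/etc/crio/crio.conf.d/02-crio.conf")
	for _, cmd := range cmds {
		fmt.Println(cmd)
	}
}
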
	I0318 21:59:30.554749   65170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:30.570281   65170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:30.570352   65170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:30.587991   65170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
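
When sysctl net.bridge.bridge-nf-call-iptables fails because the proc entry is absent, the tool falls back to loading br_netfilter and then enables IPv4 forwarding. A sketch of that check-then-modprobe fallback, shelling out to the same commands (requires root; a reasonable stand-in, not minikube's exact code path):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the sysctl key cannot
// be read, load br_netfilter, then enable IPv4 forwarding either way.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The key only exists once the module is loaded; this failure is expected.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
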
	I0318 21:59:30.600354   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:30.744678   65170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:30.902192   65170 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:30.902279   65170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:30.907869   65170 start.go:562] Will wait 60s for crictl version
	I0318 21:59:30.907937   65170 ssh_runner.go:195] Run: which crictl
	I0318 21:59:30.913588   65170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:30.957344   65170 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:30.957431   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:30.991141   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:31.024452   65170 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:59:27.301221   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:29.799576   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:26.781379   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.066468133s)
	I0318 21:59:26.781415   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 21:59:26.781445   65699 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:26.781493   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:27.747707   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 21:59:27.747764   65699 cache_images.go:123] Successfully loaded all cached images
	I0318 21:59:27.747769   65699 cache_images.go:92] duration metric: took 17.337757279s to LoadCachedImages
	I0318 21:59:27.747781   65699 kubeadm.go:928] updating node { 192.168.72.84 8443 v1.29.0-rc.2 crio true true} ...
	I0318 21:59:27.747907   65699 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-963041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
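
The kubelet drop-in printed above is rendered from node-specific values (binary path, hostname override, node IP). A small text/template sketch that produces a similar unit; the flag set is copied from the log and is not necessarily the complete template minikube uses:

package main

import (
	"os"
	"text/template"
)

// Kubelet drop-in rendered from node-specific values, mirroring the unit
// printed in the log above (flag set copied from the log, not exhaustive).
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.29.0-rc.2", "no-preload-963041", "192.168.72.84"})
}
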
	I0318 21:59:27.747986   65699 ssh_runner.go:195] Run: crio config
	I0318 21:59:27.810020   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:27.810048   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:27.810060   65699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:27.810078   65699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.84 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963041 NodeName:no-preload-963041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:27.810242   65699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963041"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
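
The generated file stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one YAML stream. A stdlib-only sketch that splits such a stream on document separators and lists each kind, a cheap sanity check before the file is shipped to /var/tmp/minikube/kubeadm.yaml.new:

package main

import (
	"fmt"
	"strings"
)

// listKinds splits a multi-document YAML stream on "---" separators and
// reports the kind: declared in each document.
func listKinds(stream string) []string {
	var kinds []string
	for _, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return kinds
}

func main() {
	stream := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	// Prints: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(listKinds(stream))
}
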
	
	I0318 21:59:27.810327   65699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 21:59:27.823120   65699 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:27.823172   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:27.834742   65699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 21:59:27.854365   65699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 21:59:27.872873   65699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 21:59:27.891245   65699 ssh_runner.go:195] Run: grep 192.168.72.84	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:27.895305   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
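
The one-liner above filters any existing control-plane.minikube.internal entry out of /etc/hosts and appends the current IP. A pure-Go equivalent of that filter-and-append, operating on the file contents as a string (hostname and IP taken from the log):

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line that maps the given hostname and appends a
// fresh "<ip>\t<hostname>" entry, mirroring the shell pipeline in the log.
func upsertHost(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+hostname) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.72.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.72.84", "control-plane.minikube.internal"))
}
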
	I0318 21:59:27.907928   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:28.044997   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:28.064471   65699 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041 for IP: 192.168.72.84
	I0318 21:59:28.064489   65699 certs.go:194] generating shared ca certs ...
	I0318 21:59:28.064503   65699 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:28.064668   65699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:28.064733   65699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:28.064747   65699 certs.go:256] generating profile certs ...
	I0318 21:59:28.064847   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/client.key
	I0318 21:59:28.064927   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key.53f57e82
	I0318 21:59:28.064975   65699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key
	I0318 21:59:28.065090   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:28.065140   65699 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:28.065154   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:28.065190   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:28.065218   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:28.065244   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:28.065292   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:28.066189   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:28.108239   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:28.147385   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:28.191255   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:28.231079   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 21:59:28.269730   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:59:28.302326   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:28.331762   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:28.359487   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:28.390196   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:28.422323   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:28.452212   65699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:28.476910   65699 ssh_runner.go:195] Run: openssl version
	I0318 21:59:28.483480   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:28.495230   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500728   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500771   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.507487   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:28.520368   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:28.533700   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540767   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540817   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.549380   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:28.566307   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:28.582377   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589139   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589192   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.597396   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
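
Each CA certificate is copied under /usr/share/ca-certificates and then exposed to OpenSSL through a <subject-hash>.0 symlink in /etc/ssl/certs, with the hash taken from openssl x509 -hash -noout. A sketch of that hash-and-link step; it shells out to openssl as the log does and assumes write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that the log builds with ln -fs.
func linkCert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// -f behaviour: drop a stale link before recreating it.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("link failed:", err)
		return
	}
	fmt.Println("created", link)
}
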
	I0318 21:59:28.610189   65699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:28.616488   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:28.625547   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:28.634680   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:28.643077   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:28.652470   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:28.660641   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
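
The -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours. The same check in pure Go with crypto/x509 (file path from the log; the 24-hour window matches the 86400 seconds in the commands):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for the
// given duration, matching `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
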
	I0318 21:59:28.669216   65699 kubeadm.go:391] StartCluster: {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:28.669342   65699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:28.669444   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.719357   65699 cri.go:89] found id: ""
	I0318 21:59:28.719427   65699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:28.733158   65699 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:28.733179   65699 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:28.733186   65699 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:28.733234   65699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:28.744804   65699 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:28.745805   65699 kubeconfig.go:125] found "no-preload-963041" server: "https://192.168.72.84:8443"
	I0318 21:59:28.747888   65699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:28.757871   65699 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.84
	I0318 21:59:28.757896   65699 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:28.757918   65699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:28.757964   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.805988   65699 cri.go:89] found id: ""
	I0318 21:59:28.806057   65699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:28.829257   65699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:28.841515   65699 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:28.841543   65699 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:28.841594   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:59:28.853433   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:28.853499   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:28.864593   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:59:28.875236   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:28.875285   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:28.887756   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.898219   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:28.898271   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.909308   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:59:28.919480   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:28.919540   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:28.930305   65699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
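
Each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is grepped for the expected control-plane endpoint and removed when it does not match, before the regenerated kubeadm.yaml is copied into place. A compact sketch of that cleanup pass (endpoint and file list copied from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleConfigs deletes kubeconfig files that do not reference the
// expected control-plane endpoint, as the grep-then-rm sequence above does.
func pruneStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or stale: remove it so kubeadm regenerates it.
			_ = os.Remove(p)
			fmt.Println("removed", p)
		}
	}
}

func main() {
	pruneStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
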
	I0318 21:59:28.941125   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:29.056129   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.261585   65699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.205423679s)
	I0318 21:59:30.261614   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.498583   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.589160   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.713046   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:30.713150   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.214160   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.034539   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.535237   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.034842   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.534620   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.034614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.534583   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.035348   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.534614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.034683   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.534528   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
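
Both restarts above poll for the kube-apiserver process with pgrep at roughly 500ms intervals before switching to the HTTP health check. A sketch of such a poll loop with a deadline; the pattern and interval come from the log, while the one-minute timeout is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it succeeds or the deadline
// passes, mirroring the repeated Run: lines above.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("process %q did not appear within %v", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
	fmt.Println(err)
}
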
	I0318 21:59:31.025614   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:31.028381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028758   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:31.028783   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028960   65170 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:31.033836   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:31.048652   65170 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:31.048798   65170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:59:31.048853   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:31.089246   65170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:59:31.089322   65170 ssh_runner.go:195] Run: which lz4
	I0318 21:59:31.094026   65170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:59:31.098900   65170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:59:31.098929   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:59:33.166556   65170 crio.go:462] duration metric: took 2.072562246s to copy over tarball
	I0318 21:59:33.166639   65170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:59:31.810567   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:34.301018   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:36.346463   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:31.714009   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.762157   65699 api_server.go:72] duration metric: took 1.049110677s to wait for apiserver process to appear ...
	I0318 21:59:31.762188   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:31.762210   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:31.762737   65699 api_server.go:269] stopped: https://192.168.72.84:8443/healthz: Get "https://192.168.72.84:8443/healthz": dial tcp 192.168.72.84:8443: connect: connection refused
	I0318 21:59:32.263205   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.738750   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.738785   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.738802   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.804061   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.804102   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.804116   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.842097   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.842144   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:35.262351   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.267395   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.267439   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:35.763016   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.775072   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.775109   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.262338   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:36.267165   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:36.267207   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.762879   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.074225   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:37.074263   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:37.262637   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.267514   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 21:59:37.275551   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 21:59:37.275579   65699 api_server.go:131] duration metric: took 5.513383348s to wait for apiserver health ...
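
The wait above tolerates connection refused, 403 (the anonymous probe before RBAC bootstrap completes) and 500 (failing poststarthooks) until /healthz finally returns 200. A trimmed-down Go version of that loop; the endpoint is the one from the log, while the retry interval, timeout, and the decision to skip TLS verification are assumptions for the sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the timeout elapses; connection errors and non-200 codes are retried.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate during bootstrap.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.72.84:8443/healthz", 2*time.Minute))
}
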
	I0318 21:59:37.275590   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:37.275598   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:37.496330   65699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:37.641915   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:37.659277   65699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:37.684019   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:38.075296   65699 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:38.075333   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:38.075353   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:38.075367   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:38.075375   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:38.075388   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:38.075407   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:38.075418   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:38.075429   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:38.075440   65699 system_pods.go:74] duration metric: took 391.399859ms to wait for pod list to return data ...
	I0318 21:59:38.075452   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:38.252627   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:38.252659   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:38.252670   65699 node_conditions.go:105] duration metric: took 177.209294ms to run NodePressure ...
	I0318 21:59:38.252692   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.662257   65699 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670807   65699 kubeadm.go:733] kubelet initialised
	I0318 21:59:38.670836   65699 kubeadm.go:734] duration metric: took 8.550399ms waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670846   65699 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:38.680740   65699 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.689134   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689157   65699 pod_ready.go:81] duration metric: took 8.393104ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.689169   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689178   65699 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.693796   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693815   65699 pod_ready.go:81] duration metric: took 4.628403ms for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.693824   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693829   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.701225   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701245   65699 pod_ready.go:81] duration metric: took 7.410052ms for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.701254   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701262   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.707848   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707871   65699 pod_ready.go:81] duration metric: took 6.598987ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.707882   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707889   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.066641   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066668   65699 pod_ready.go:81] duration metric: took 358.769058ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.066679   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066687   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.466406   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466440   65699 pod_ready.go:81] duration metric: took 399.746217ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.466449   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466455   65699 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.866206   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866232   65699 pod_ready.go:81] duration metric: took 399.76891ms for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.866240   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866247   65699 pod_ready.go:38] duration metric: took 1.195391629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
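The pod_ready.go wait above boils down to reading each system pod's Ready condition and skipping the wait while the node itself still reports Ready=False. As an illustrative sketch only (the kubeconfig path and pod name below are taken from this log and would differ elsewhere), the same check can be done with client-go:

    // podready.go: report the Ready condition of one kube-system pod.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig location as seen earlier in this log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18421-5321/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-6mtzp", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
    		}
    	}
    }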
	I0318 21:59:39.866263   65699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 21:59:39.879772   65699 ops.go:34] apiserver oom_adj: -16
	I0318 21:59:39.879796   65699 kubeadm.go:591] duration metric: took 11.146603139s to restartPrimaryControlPlane
	I0318 21:59:39.879807   65699 kubeadm.go:393] duration metric: took 11.21059758s to StartCluster
	I0318 21:59:39.879825   65699 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.879915   65699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:59:39.881739   65699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.881970   65699 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:59:39.883934   65699 out.go:177] * Verifying Kubernetes components...
	I0318 21:59:39.882064   65699 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 21:59:39.882254   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:39.885913   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:39.885924   65699 addons.go:69] Setting metrics-server=true in profile "no-preload-963041"
	I0318 21:59:39.885932   65699 addons.go:69] Setting default-storageclass=true in profile "no-preload-963041"
	I0318 21:59:39.885950   65699 addons.go:234] Setting addon metrics-server=true in "no-preload-963041"
	W0318 21:59:39.885958   65699 addons.go:243] addon metrics-server should already be in state true
	I0318 21:59:39.885966   65699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963041"
	I0318 21:59:39.885918   65699 addons.go:69] Setting storage-provisioner=true in profile "no-preload-963041"
	I0318 21:59:39.885985   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886000   65699 addons.go:234] Setting addon storage-provisioner=true in "no-preload-963041"
	W0318 21:59:39.886052   65699 addons.go:243] addon storage-provisioner should already be in state true
	I0318 21:59:39.886075   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886384   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886403   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886437   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886392   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886448   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886438   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.902103   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0318 21:59:39.902574   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.903192   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.903211   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.903568   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.904113   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.904142   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.908122   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0318 21:59:39.908269   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0318 21:59:39.908566   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.908639   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.909237   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.909251   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.909662   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.909834   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.913534   65699 addons.go:234] Setting addon default-storageclass=true in "no-preload-963041"
	W0318 21:59:39.913558   65699 addons.go:243] addon default-storageclass should already be in state true
	I0318 21:59:39.913586   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.913959   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.913992   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.921260   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.921284   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.921661   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.922725   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.922778   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.925575   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0318 21:59:39.926170   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.926799   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.926819   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.933014   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0318 21:59:39.933066   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.934464   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.934527   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.935441   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.935456   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.936236   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.936821   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.936870   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.936983   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.938986   65699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:39.940103   65699 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:39.940115   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 21:59:39.940128   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.942712   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943138   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.943168   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943415   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.943574   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.943690   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.943828   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.944813   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0318 21:59:39.961605   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.962117   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.962140   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.962564   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.962745   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.964606   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.970697   65699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 21:59:35.034845   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:35.535418   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.534613   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.034944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.535119   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.035549   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.534668   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.034813   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.534586   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.222479   65170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.055805805s)
	I0318 21:59:36.222507   65170 crio.go:469] duration metric: took 3.055923767s to extract the tarball
	I0318 21:59:36.222515   65170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:59:36.265990   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:36.314679   65170 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:59:36.314704   65170 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:59:36.314714   65170 kubeadm.go:928] updating node { 192.168.50.150 8444 v1.28.4 crio true true} ...
	I0318 21:59:36.314828   65170 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-660775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:36.314900   65170 ssh_runner.go:195] Run: crio config
	I0318 21:59:36.375889   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:36.375908   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:36.375916   65170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:36.375935   65170 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.150 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-660775 NodeName:default-k8s-diff-port-660775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:36.376058   65170 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.150
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-660775"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:36.376117   65170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:59:36.387851   65170 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:36.387905   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:36.398095   65170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0318 21:59:36.416507   65170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:59:36.437165   65170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
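At this point the kubeadm, kubelet, and kube-proxy configuration rendered above has been written to /var/tmp/minikube/kubeadm.yaml.new on the guest; once copied over /var/tmp/minikube/kubeadm.yaml it is consumed by the individual kubeadm init phases that appear further down in this log. As a minimal sketch of driving one such phase from Go (mirroring the certs phase command the log runs later, not minikube's ssh_runner), assuming kubeadm is installed under the usual minikube binaries path:

    // kubeadm_phase.go: run "kubeadm init phase certs all" against the generated config.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.28.4:/usr/bin:/bin",
    		"kubeadm", "init", "phase", "certs", "all",
    		"--config", "/var/tmp/minikube/kubeadm.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("kubeadm phase failed:", err)
    	}
    }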
	I0318 21:59:36.458125   65170 ssh_runner.go:195] Run: grep 192.168.50.150	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:36.462688   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:36.476913   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:36.629523   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:36.648679   65170 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775 for IP: 192.168.50.150
	I0318 21:59:36.648697   65170 certs.go:194] generating shared ca certs ...
	I0318 21:59:36.648717   65170 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:36.648870   65170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:36.648942   65170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:36.648956   65170 certs.go:256] generating profile certs ...
	I0318 21:59:36.649061   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/client.key
	I0318 21:59:36.649136   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key.6eb93750
	I0318 21:59:36.649181   65170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key
	I0318 21:59:36.649342   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:36.649408   65170 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:36.649427   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:36.649465   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:36.649502   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:36.649524   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:36.649563   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:36.650116   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:36.709130   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:36.777530   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:36.822349   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:36.861155   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 21:59:36.899264   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:59:36.930697   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:36.960715   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:36.992062   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:37.020001   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:37.051443   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:37.080115   65170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:37.102221   65170 ssh_runner.go:195] Run: openssl version
	I0318 21:59:37.111020   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:37.127447   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132675   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132730   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.139092   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:37.151349   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:37.166470   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172601   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172656   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.179404   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:37.192628   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:37.206758   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211839   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211882   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.218285   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:59:37.230291   65170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:37.235312   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:37.242399   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:37.249658   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:37.256458   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:37.263110   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:37.270329   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
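Each of the openssl x509 -checkend 86400 runs above asks whether a control-plane certificate will expire within the next 24 hours (86400 seconds). A minimal Go equivalent of one such check, shown only to illustrate what -checkend does (the file path is one of the certs listed above and assumes the minikube guest layout):

    // certcheck.go: fail if a PEM certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Println("read:", err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println("parse:", err)
    		os.Exit(1)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid until", cert.NotAfter)
    }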
	I0318 21:59:37.277040   65170 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:37.277140   65170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:37.277176   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.320525   65170 cri.go:89] found id: ""
	I0318 21:59:37.320595   65170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:37.332584   65170 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:37.332602   65170 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:37.332608   65170 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:37.332678   65170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:37.348017   65170 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:37.349557   65170 kubeconfig.go:125] found "default-k8s-diff-port-660775" server: "https://192.168.50.150:8444"
	I0318 21:59:37.352826   65170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:37.367223   65170 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.150
	I0318 21:59:37.367256   65170 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:37.367267   65170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:37.367315   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.411319   65170 cri.go:89] found id: ""
	I0318 21:59:37.411401   65170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:37.431545   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:37.442587   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:37.442610   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:37.442661   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 21:59:37.452384   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:37.452439   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:37.462519   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 21:59:37.472669   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:37.472728   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:37.483107   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.493177   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:37.493224   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.503546   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 21:59:37.513471   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:37.513512   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:37.524147   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:37.534940   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:37.665308   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.882330   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216992532s)
	I0318 21:59:38.882356   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.110948   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.217267   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.332300   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:39.332389   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.833190   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.972027   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 21:59:39.972078   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 21:59:39.972109   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.975122   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975608   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.975627   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975994   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.976196   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.976371   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.976663   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.982859   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I0318 21:59:39.983263   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.983860   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.983904   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.984308   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.984558   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.986338   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.986645   65699 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:39.986690   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 21:59:39.986718   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.989398   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989741   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.989999   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989951   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.990229   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.990392   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.990517   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:40.115233   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:40.136271   65699 node_ready.go:35] waiting up to 6m0s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:40.232668   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:40.234394   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 21:59:40.234417   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 21:59:40.256237   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:40.301845   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 21:59:40.301873   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 21:59:40.354405   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:40.354435   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 21:59:40.377996   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:41.389416   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.156705132s)
	I0318 21:59:41.389429   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.133120616s)
	I0318 21:59:41.389470   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389475   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389482   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389486   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389763   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389783   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389792   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389799   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389828   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.389874   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389890   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389899   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389938   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.390199   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390398   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.390339   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390375   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390451   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390470   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.397714   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.397736   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.397951   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.397999   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.398017   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.415620   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037584799s)
	I0318 21:59:41.415673   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.415684   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.415964   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.415992   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.416007   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416016   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.416027   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.416207   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.416220   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416229   65699 addons.go:470] Verifying addon metrics-server=true in "no-preload-963041"
	I0318 21:59:41.418761   65699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 21:59:38.798943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:40.800913   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:41.420038   65699 addons.go:505] duration metric: took 1.537986468s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 21:59:40.332810   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.411342   65170 api_server.go:72] duration metric: took 1.079036948s to wait for apiserver process to appear ...
	I0318 21:59:40.411371   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:40.411394   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:40.411932   65170 api_server.go:269] stopped: https://192.168.50.150:8444/healthz: Get "https://192.168.50.150:8444/healthz": dial tcp 192.168.50.150:8444: connect: connection refused
	I0318 21:59:40.911545   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.377410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.377443   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.377471   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.426410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.426468   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.426485   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.448464   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.448523   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:43.912498   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.918271   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.918309   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.411824   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.422200   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:44.422223   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.911509   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.916884   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 21:59:44.928835   65170 api_server.go:141] control plane version: v1.28.4
	I0318 21:59:44.928862   65170 api_server.go:131] duration metric: took 4.517483413s to wait for apiserver health ...
	I0318 21:59:44.928872   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:44.928881   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:44.930794   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
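The 403 and 500 responses in the healthz probes above are the usual startup sequence for a restarted apiserver: /healthz is Forbidden to the anonymous probe until the system:public-info-viewer cluster role has been bootstrapped, then returns 500 while the rbac/bootstrap-roles and bootstrap-controller post-start hooks are still failing, and finally 200. A minimal sketch of probing the same endpoint by hand, not part of the test run (the kubectl context name is assumed to match the minikube profile; host and port are taken from the URLs in the log):

	# verbose health output through an authenticated kubeconfig context
	kubectl --context default-k8s-diff-port-660775 get --raw='/healthz?verbose'
	# unauthenticated probe of the same endpoint; this is what returns the
	# "system:anonymous" 403 until the public-info-viewer role exists
	curl -sk https://192.168.50.150:8444/healthz
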
	I0318 21:59:40.035532   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.535482   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.035196   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.534632   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.035183   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.535562   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.034598   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.534971   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.535025   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.932164   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:44.959217   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:45.002449   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:45.017348   65170 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:45.017394   65170 system_pods.go:61] "coredns-5dd5756b68-cjq2v" [9ae899ef-63e4-407d-9013-71552ec87614] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:45.017407   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [286b98ba-bc9e-4e2f-984c-d7b2447aef15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:45.017417   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [7a0db461-f8d5-4331-993e-d7b9345159e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:45.017428   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [e4f5859a-dfcc-41d8-9a17-acb601449821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:45.017443   65170 system_pods.go:61] "kube-proxy-qt2m6" [c3c7c6db-4935-4079-b0e7-60ba2cd886b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:45.017450   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [7115eef0-5ff4-4dfe-9135-88ad8f698e43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:45.017461   65170 system_pods.go:61] "metrics-server-57f55c9bc5-5dtf5" [b19191ee-e2db-4392-82e2-1a95fae76101] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:45.017489   65170 system_pods.go:61] "storage-provisioner" [045d4b30-47a3-4c80-a9e8-c36ef7395e6c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:45.017498   65170 system_pods.go:74] duration metric: took 15.027239ms to wait for pod list to return data ...
	I0318 21:59:45.017511   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:45.020962   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:45.020982   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:45.020991   65170 node_conditions.go:105] duration metric: took 3.47292ms to run NodePressure ...
	I0318 21:59:45.021007   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:45.277662   65170 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282939   65170 kubeadm.go:733] kubelet initialised
	I0318 21:59:45.282958   65170 kubeadm.go:734] duration metric: took 5.277143ms waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282965   65170 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.289546   65170 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:43.299509   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:45.300875   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:42.142145   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:44.641863   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:45.640660   65699 node_ready.go:49] node "no-preload-963041" has status "Ready":"True"
	I0318 21:59:45.640686   65699 node_ready.go:38] duration metric: took 5.50437071s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:45.640697   65699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.647087   65699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652062   65699 pod_ready.go:92] pod "coredns-76f75df574-6mtzp" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.652081   65699 pod_ready.go:81] duration metric: took 4.969873ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652091   65699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.035239   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.535303   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.034742   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.534584   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.034935   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.534952   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.534497   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.035380   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.535498   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.296790   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298834   65170 pod_ready.go:81] duration metric: took 9.259848ms for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.298849   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298868   65170 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.307325   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307367   65170 pod_ready.go:81] duration metric: took 8.486967ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.307380   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307389   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.319473   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319498   65170 pod_ready.go:81] duration metric: took 12.100242ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.319514   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319522   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.407356   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407379   65170 pod_ready.go:81] duration metric: took 87.846686ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.407390   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407395   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806835   65170 pod_ready.go:92] pod "kube-proxy-qt2m6" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.806866   65170 pod_ready.go:81] duration metric: took 399.462221ms for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806878   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:47.814286   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:47.799616   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:50.300118   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:46.659819   65699 pod_ready.go:92] pod "etcd-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:46.659855   65699 pod_ready.go:81] duration metric: took 1.007755238s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:46.659868   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:48.669033   65699 pod_ready.go:102] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:51.168202   65699 pod_ready.go:92] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.168229   65699 pod_ready.go:81] duration metric: took 4.508354098s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.168240   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174243   65699 pod_ready.go:92] pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.174268   65699 pod_ready.go:81] duration metric: took 6.018685ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174280   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179279   65699 pod_ready.go:92] pod "kube-proxy-kkrzx" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.179300   65699 pod_ready.go:81] duration metric: took 5.012711ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179311   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185651   65699 pod_ready.go:92] pod "kube-scheduler-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.185670   65699 pod_ready.go:81] duration metric: took 6.351567ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185678   65699 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
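The pod_ready waits above check the Ready condition on each system-critical pod, using the label set listed at the start of the wait (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). A rough hand-run equivalent, as a sketch only (the context name is assumed to match the profile; the timeout is illustrative):

	kubectl --context no-preload-963041 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	# the pod the loop below keeps polling, which stays Ready=False in this excerpt
	kubectl --context no-preload-963041 -n kube-system get pod metrics-server-57f55c9bc5-rdthh -o wide
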
	I0318 21:59:50.034691   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.534680   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.535213   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.034594   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.535195   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.034574   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.535423   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.035369   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.534621   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.315135   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.814432   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.798645   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:54.800561   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:53.191834   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.192346   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.035308   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:55.535503   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.035231   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.534937   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.035317   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.534581   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.034565   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.534830   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.535280   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:59:59.535354   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:59:59.577600   65622 cri.go:89] found id: ""
	I0318 21:59:59.577632   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.577643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:59:59.577651   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:59:59.577710   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:59:59.614134   65622 cri.go:89] found id: ""
	I0318 21:59:59.614158   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.614166   65622 logs.go:278] No container was found matching "etcd"
	I0318 21:59:59.614171   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:59:59.614245   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:59:59.653525   65622 cri.go:89] found id: ""
	I0318 21:59:59.653559   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.653571   65622 logs.go:278] No container was found matching "coredns"
	I0318 21:59:59.653578   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:59:59.653633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:59:59.699104   65622 cri.go:89] found id: ""
	I0318 21:59:59.699128   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.699139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:59:59.699146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:59:59.699214   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:59:59.735750   65622 cri.go:89] found id: ""
	I0318 21:59:59.735779   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.735789   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:59:59.735796   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:59:59.735876   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:59:59.775105   65622 cri.go:89] found id: ""
	I0318 21:59:59.775134   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.775142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:59:59.775149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:59:59.775193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:59:59.814154   65622 cri.go:89] found id: ""
	I0318 21:59:59.814181   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.814190   65622 logs.go:278] No container was found matching "kindnet"
	I0318 21:59:59.814197   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 21:59:59.814254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 21:59:59.852518   65622 cri.go:89] found id: ""
	I0318 21:59:59.852545   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.852556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 21:59:59.852565   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 21:59:59.852578   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:59:59.907243   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 21:59:59.907285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:59:59.922512   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:59:59.922540   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 21:59:55.313448   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.813863   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:56.813885   65170 pod_ready.go:81] duration metric: took 11.006997984s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:56.813893   65170 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:58.820535   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.802709   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:59.299235   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:01.299761   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:57.694309   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:00.192594   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	W0318 22:00:00.059182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:00.059202   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:00.059216   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:00.125654   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:00.125686   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
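The repeated empty crictl listings above mean no control-plane containers exist on this node yet, so the retry loop falls back to gathering kubelet, dmesg, CRI-O and container-status logs. The same data can be pulled by hand over SSH; a sketch, with <profile> as a placeholder for the profile this PID (65622) belongs to, which is not named in this excerpt:

	minikube -p <profile> ssh -- sudo crictl ps -a                    # container status
	minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400    # kubelet log tail
	minikube -p <profile> ssh -- sudo journalctl -u crio -n 400       # CRI-O log tail
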
	I0318 22:00:02.675440   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:02.689549   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:02.689628   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:02.731742   65622 cri.go:89] found id: ""
	I0318 22:00:02.731764   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.731771   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:02.731776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:02.731823   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:02.809611   65622 cri.go:89] found id: ""
	I0318 22:00:02.809643   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.809651   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:02.809656   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:02.809699   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:02.853939   65622 cri.go:89] found id: ""
	I0318 22:00:02.853972   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.853982   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:02.853990   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:02.854050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:02.892668   65622 cri.go:89] found id: ""
	I0318 22:00:02.892699   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.892709   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:02.892715   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:02.892773   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:02.934267   65622 cri.go:89] found id: ""
	I0318 22:00:02.934296   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.934307   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:02.934313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:02.934370   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:02.972533   65622 cri.go:89] found id: ""
	I0318 22:00:02.972556   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.972564   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:02.972569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:02.972614   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:03.011102   65622 cri.go:89] found id: ""
	I0318 22:00:03.011128   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.011137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:03.011142   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:03.011188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:03.060636   65622 cri.go:89] found id: ""
	I0318 22:00:03.060664   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.060673   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:03.060696   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:03.060710   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:03.145042   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:03.145070   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:03.145087   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:03.218475   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:03.218504   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:03.262154   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:03.262185   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:03.316766   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:03.316803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:00.821070   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.821300   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:03.301922   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.799844   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.693235   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:04.693324   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.833936   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:05.850780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:05.850858   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:05.894909   65622 cri.go:89] found id: ""
	I0318 22:00:05.894931   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.894938   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:05.894944   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:05.894987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:05.935989   65622 cri.go:89] found id: ""
	I0318 22:00:05.936020   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.936028   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:05.936032   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:05.936081   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:05.976774   65622 cri.go:89] found id: ""
	I0318 22:00:05.976797   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.976805   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:05.976811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:05.976869   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:06.015350   65622 cri.go:89] found id: ""
	I0318 22:00:06.015376   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.015387   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:06.015394   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:06.015453   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:06.059389   65622 cri.go:89] found id: ""
	I0318 22:00:06.059416   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.059427   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:06.059434   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:06.059513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:06.099524   65622 cri.go:89] found id: ""
	I0318 22:00:06.099544   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.099553   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:06.099558   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:06.099601   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:06.140343   65622 cri.go:89] found id: ""
	I0318 22:00:06.140374   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.140386   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:06.140393   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:06.140448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:06.179217   65622 cri.go:89] found id: ""
	I0318 22:00:06.179247   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.179257   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:06.179268   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:06.179286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:06.231348   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:06.231379   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:06.246049   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:06.246084   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:06.326182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:06.326203   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:06.326215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:06.405862   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:06.405895   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:08.955965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:08.970007   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:08.970076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:09.008724   65622 cri.go:89] found id: ""
	I0318 22:00:09.008752   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.008764   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:09.008781   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:09.008856   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:09.050121   65622 cri.go:89] found id: ""
	I0318 22:00:09.050158   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.050165   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:09.050170   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:09.050227   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:09.090263   65622 cri.go:89] found id: ""
	I0318 22:00:09.090293   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.090304   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:09.090312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:09.090375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:09.127645   65622 cri.go:89] found id: ""
	I0318 22:00:09.127679   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.127690   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:09.127697   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:09.127755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:09.169171   65622 cri.go:89] found id: ""
	I0318 22:00:09.169199   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.169211   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:09.169218   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:09.169278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:09.209923   65622 cri.go:89] found id: ""
	I0318 22:00:09.209949   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.209956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:09.209963   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:09.210013   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:09.247990   65622 cri.go:89] found id: ""
	I0318 22:00:09.248029   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.248039   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:09.248050   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:09.248109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:09.287287   65622 cri.go:89] found id: ""
	I0318 22:00:09.287326   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.287337   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:09.287347   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:09.287369   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:09.342877   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:09.342902   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:09.359137   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:09.359159   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:09.454504   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:09.454528   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:09.454543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:09.549191   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:09.549223   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:05.322655   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.820557   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.821227   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.799881   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.802803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:06.694723   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.096415   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:12.112886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:12.112969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:12.155639   65622 cri.go:89] found id: ""
	I0318 22:00:12.155662   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.155670   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:12.155676   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:12.155729   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:12.199252   65622 cri.go:89] found id: ""
	I0318 22:00:12.199283   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.199293   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:12.199301   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:12.199385   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:12.239688   65622 cri.go:89] found id: ""
	I0318 22:00:12.239719   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.239728   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:12.239734   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:12.239788   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:12.278610   65622 cri.go:89] found id: ""
	I0318 22:00:12.278640   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.278651   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:12.278659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:12.278724   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:12.318834   65622 cri.go:89] found id: ""
	I0318 22:00:12.318864   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.318873   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:12.318881   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:12.318939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:12.358964   65622 cri.go:89] found id: ""
	I0318 22:00:12.358986   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.358994   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:12.359002   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:12.359050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:12.399041   65622 cri.go:89] found id: ""
	I0318 22:00:12.399070   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.399080   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:12.399087   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:12.399151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:12.445019   65622 cri.go:89] found id: ""
	I0318 22:00:12.445043   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.445053   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:12.445064   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:12.445079   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:12.504987   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:12.505023   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:12.521381   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:12.521408   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:12.601574   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:12.601599   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:12.601615   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:12.683772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:12.683801   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:11.821593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:13.821792   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.299680   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.300073   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:11.693179   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.194532   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:15.229005   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:15.248227   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:15.248296   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:15.307918   65622 cri.go:89] found id: ""
	I0318 22:00:15.307940   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.307947   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:15.307953   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:15.307997   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:15.367388   65622 cri.go:89] found id: ""
	I0318 22:00:15.367417   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.367436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:15.367453   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:15.367513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:15.410880   65622 cri.go:89] found id: ""
	I0318 22:00:15.410910   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.410919   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:15.410926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:15.410983   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:15.450980   65622 cri.go:89] found id: ""
	I0318 22:00:15.451004   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.451011   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:15.451018   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:15.451071   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:15.491196   65622 cri.go:89] found id: ""
	I0318 22:00:15.491222   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.491233   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:15.491239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:15.491284   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:15.537135   65622 cri.go:89] found id: ""
	I0318 22:00:15.537159   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.537166   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:15.537173   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:15.537226   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:15.580730   65622 cri.go:89] found id: ""
	I0318 22:00:15.580762   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.580772   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:15.580780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:15.580852   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:15.626221   65622 cri.go:89] found id: ""
	I0318 22:00:15.626252   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.626265   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:15.626276   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:15.626292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.670571   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:15.670600   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:15.725485   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:15.725519   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:15.742790   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:15.742820   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:15.824867   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:15.824889   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:15.824924   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.407070   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:18.421757   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:18.421824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:18.461024   65622 cri.go:89] found id: ""
	I0318 22:00:18.461044   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.461052   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:18.461058   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:18.461104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:18.499002   65622 cri.go:89] found id: ""
	I0318 22:00:18.499032   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.499040   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:18.499046   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:18.499091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:18.539207   65622 cri.go:89] found id: ""
	I0318 22:00:18.539237   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.539248   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:18.539255   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:18.539315   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:18.579691   65622 cri.go:89] found id: ""
	I0318 22:00:18.579717   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.579726   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:18.579733   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:18.579814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:18.625084   65622 cri.go:89] found id: ""
	I0318 22:00:18.625111   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.625120   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:18.625126   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:18.625178   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:18.669012   65622 cri.go:89] found id: ""
	I0318 22:00:18.669038   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.669047   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:18.669053   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:18.669101   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:18.707523   65622 cri.go:89] found id: ""
	I0318 22:00:18.707544   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.707551   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:18.707557   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:18.707611   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:18.755138   65622 cri.go:89] found id: ""
	I0318 22:00:18.755162   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.755173   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:18.755184   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:18.755199   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:18.809140   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:18.809163   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:18.827102   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:18.827125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:18.904168   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:18.904194   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:18.904209   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.982438   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:18.982471   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.822593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.321691   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.798687   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.802403   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.302525   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.692709   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.692875   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:20.693620   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.532643   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:21.547477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:21.547545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:21.585013   65622 cri.go:89] found id: ""
	I0318 22:00:21.585038   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.585049   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:21.585056   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:21.585114   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:21.628115   65622 cri.go:89] found id: ""
	I0318 22:00:21.628139   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.628147   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:21.628153   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:21.628207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:21.664896   65622 cri.go:89] found id: ""
	I0318 22:00:21.664931   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.664942   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:21.664948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:21.665010   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:21.705770   65622 cri.go:89] found id: ""
	I0318 22:00:21.705794   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.705803   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:21.705811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:21.705868   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:21.751268   65622 cri.go:89] found id: ""
	I0318 22:00:21.751296   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.751305   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:21.751313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:21.751376   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:21.798688   65622 cri.go:89] found id: ""
	I0318 22:00:21.798714   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.798724   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:21.798732   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:21.798800   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:21.839253   65622 cri.go:89] found id: ""
	I0318 22:00:21.839281   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.839290   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:21.839297   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:21.839365   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:21.884026   65622 cri.go:89] found id: ""
	I0318 22:00:21.884055   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.884068   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:21.884086   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:21.884105   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:21.940412   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:21.940446   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:21.956634   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:21.956660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:22.031458   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:22.031481   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:22.031497   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:22.115902   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:22.115932   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:24.665945   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:24.680474   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:24.680545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:24.719692   65622 cri.go:89] found id: ""
	I0318 22:00:24.719711   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.719718   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:24.719723   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:24.719768   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:24.760734   65622 cri.go:89] found id: ""
	I0318 22:00:24.760758   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.760767   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:24.760775   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:24.760830   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:24.802688   65622 cri.go:89] found id: ""
	I0318 22:00:24.802710   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.802717   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:24.802723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:24.802778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:24.842693   65622 cri.go:89] found id: ""
	I0318 22:00:24.842715   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.842723   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:24.842730   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:24.842796   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:24.887149   65622 cri.go:89] found id: ""
	I0318 22:00:24.887173   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.887185   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:24.887195   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:24.887278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:24.926465   65622 cri.go:89] found id: ""
	I0318 22:00:24.926511   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.926522   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:24.926530   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:24.926584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:24.966876   65622 cri.go:89] found id: ""
	I0318 22:00:24.966897   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.966904   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:24.966910   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:24.966957   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:20.820297   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:22.821250   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:24.825337   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.800104   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:26.299105   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.193665   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.194188   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.007251   65622 cri.go:89] found id: ""
	I0318 22:00:25.007277   65622 logs.go:276] 0 containers: []
	W0318 22:00:25.007288   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:25.007298   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:25.007311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:25.092214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:25.092235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:25.092247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:25.173041   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:25.173076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:25.221169   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:25.221194   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:25.276322   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:25.276352   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:27.792368   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:27.809294   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:27.809359   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:27.848976   65622 cri.go:89] found id: ""
	I0318 22:00:27.849005   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.849015   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:27.849023   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:27.849076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:27.890416   65622 cri.go:89] found id: ""
	I0318 22:00:27.890437   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.890445   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:27.890450   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:27.890505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:27.934782   65622 cri.go:89] found id: ""
	I0318 22:00:27.934807   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.934819   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:27.934827   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:27.934911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:27.972251   65622 cri.go:89] found id: ""
	I0318 22:00:27.972275   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.972283   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:27.972288   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:27.972366   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:28.011321   65622 cri.go:89] found id: ""
	I0318 22:00:28.011345   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.011357   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:28.011363   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:28.011421   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:28.048087   65622 cri.go:89] found id: ""
	I0318 22:00:28.048109   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.048116   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:28.048122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:28.048169   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:28.088840   65622 cri.go:89] found id: ""
	I0318 22:00:28.088868   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.088878   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:28.088886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:28.088961   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:28.128687   65622 cri.go:89] found id: ""
	I0318 22:00:28.128714   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.128723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:28.128733   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:28.128745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:28.170853   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:28.170882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:28.224825   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:28.224850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:28.239744   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:28.239773   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:28.318640   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:28.318664   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:28.318680   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:27.321417   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:29.326924   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:28.798399   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.800456   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:27.692517   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.194633   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.897430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:30.914894   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:30.914950   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:30.952709   65622 cri.go:89] found id: ""
	I0318 22:00:30.952737   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.952748   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:30.952756   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:30.952814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:30.991113   65622 cri.go:89] found id: ""
	I0318 22:00:30.991142   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.991151   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:30.991159   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:30.991218   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:31.030248   65622 cri.go:89] found id: ""
	I0318 22:00:31.030273   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.030283   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:31.030291   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:31.030356   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:31.070836   65622 cri.go:89] found id: ""
	I0318 22:00:31.070860   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.070868   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:31.070874   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:31.070941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:31.109134   65622 cri.go:89] found id: ""
	I0318 22:00:31.109154   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.109162   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:31.109167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:31.109222   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:31.149757   65622 cri.go:89] found id: ""
	I0318 22:00:31.149784   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.149794   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:31.149802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:31.149862   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:31.190355   65622 cri.go:89] found id: ""
	I0318 22:00:31.190383   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.190393   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:31.190401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:31.190462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:31.229866   65622 cri.go:89] found id: ""
	I0318 22:00:31.229892   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.229900   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:31.229909   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:31.229926   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:31.284984   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:31.285027   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:31.301026   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:31.301050   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:31.378120   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:31.378143   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:31.378158   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:31.459445   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:31.459475   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:34.003989   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:34.020959   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:34.021012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:34.060045   65622 cri.go:89] found id: ""
	I0318 22:00:34.060074   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.060086   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:34.060103   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:34.060151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:34.101259   65622 cri.go:89] found id: ""
	I0318 22:00:34.101289   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.101299   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:34.101307   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:34.101372   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:34.141056   65622 cri.go:89] found id: ""
	I0318 22:00:34.141085   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.141096   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:34.141103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:34.141166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:34.179757   65622 cri.go:89] found id: ""
	I0318 22:00:34.179786   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.179797   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:34.179805   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:34.179872   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:34.221928   65622 cri.go:89] found id: ""
	I0318 22:00:34.221956   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.221989   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:34.221998   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:34.222063   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:34.260775   65622 cri.go:89] found id: ""
	I0318 22:00:34.260796   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.260804   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:34.260809   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:34.260866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:34.300910   65622 cri.go:89] found id: ""
	I0318 22:00:34.300936   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.300944   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:34.300950   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:34.300994   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:34.343581   65622 cri.go:89] found id: ""
	I0318 22:00:34.343611   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.343619   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:34.343628   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:34.343640   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:34.399298   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:34.399330   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:34.414580   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:34.414619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:34.488013   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:34.488031   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:34.488043   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:34.580958   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:34.580994   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:31.821301   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:34.322210   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:33.299227   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.800314   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:32.693924   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.191865   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.129601   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:37.147758   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:37.147827   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:37.194763   65622 cri.go:89] found id: ""
	I0318 22:00:37.194784   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.194791   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:37.194797   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:37.194845   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:37.236298   65622 cri.go:89] found id: ""
	I0318 22:00:37.236326   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.236334   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:37.236353   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:37.236488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:37.274776   65622 cri.go:89] found id: ""
	I0318 22:00:37.274803   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.274813   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:37.274819   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:37.274883   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:37.319360   65622 cri.go:89] found id: ""
	I0318 22:00:37.319385   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.319395   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:37.319401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:37.319463   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:37.365699   65622 cri.go:89] found id: ""
	I0318 22:00:37.365726   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.365734   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:37.365740   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:37.365824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:37.404758   65622 cri.go:89] found id: ""
	I0318 22:00:37.404789   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.404799   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:37.404807   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:37.404874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:37.444567   65622 cri.go:89] found id: ""
	I0318 22:00:37.444591   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.444598   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:37.444603   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:37.444665   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:37.487729   65622 cri.go:89] found id: ""
	I0318 22:00:37.487752   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.487760   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:37.487767   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:37.487786   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:37.566214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:37.566235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:37.566258   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:37.647847   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:37.647930   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:37.693027   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:37.693057   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:37.748111   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:37.748152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:36.324995   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.820800   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.298887   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.299570   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.193636   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:39.693273   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.277510   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:40.292312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:40.292384   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:40.330335   65622 cri.go:89] found id: ""
	I0318 22:00:40.330368   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.330379   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:40.330386   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:40.330441   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:40.372534   65622 cri.go:89] found id: ""
	I0318 22:00:40.372560   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.372570   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:40.372577   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:40.372624   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:40.409430   65622 cri.go:89] found id: ""
	I0318 22:00:40.409460   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.409471   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:40.409478   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:40.409525   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:40.448350   65622 cri.go:89] found id: ""
	I0318 22:00:40.448372   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.448380   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:40.448385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:40.448431   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:40.490526   65622 cri.go:89] found id: ""
	I0318 22:00:40.490550   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.490559   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:40.490564   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:40.490613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:40.528926   65622 cri.go:89] found id: ""
	I0318 22:00:40.528953   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.528963   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:40.528971   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:40.529031   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:40.565779   65622 cri.go:89] found id: ""
	I0318 22:00:40.565808   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.565818   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:40.565826   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:40.565902   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:40.604152   65622 cri.go:89] found id: ""
	I0318 22:00:40.604181   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.604192   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:40.604201   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:40.604215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:40.689274   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:40.689310   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:40.736810   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:40.736844   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:40.796033   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:40.796061   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:40.811906   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:40.811929   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:40.889595   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:43.390663   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:43.407179   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:43.407254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:43.448653   65622 cri.go:89] found id: ""
	I0318 22:00:43.448685   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.448696   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:43.448704   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:43.448772   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:43.489437   65622 cri.go:89] found id: ""
	I0318 22:00:43.489464   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.489472   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:43.489478   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:43.489533   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:43.564173   65622 cri.go:89] found id: ""
	I0318 22:00:43.564199   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.564209   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:43.564217   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:43.564278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:43.606221   65622 cri.go:89] found id: ""
	I0318 22:00:43.606250   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.606260   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:43.606267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:43.606333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:43.646748   65622 cri.go:89] found id: ""
	I0318 22:00:43.646782   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.646794   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:43.646802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:43.646864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:43.690465   65622 cri.go:89] found id: ""
	I0318 22:00:43.690496   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.690509   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:43.690519   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:43.690584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:43.730421   65622 cri.go:89] found id: ""
	I0318 22:00:43.730454   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.730464   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:43.730473   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:43.730538   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:43.769597   65622 cri.go:89] found id: ""
	I0318 22:00:43.769626   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.769636   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:43.769646   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:43.769660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:43.858316   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:43.858351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:43.907387   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:43.907417   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:43.963234   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:43.963271   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:43.979226   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:43.979253   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:44.065174   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:40.821224   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:43.319945   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.300484   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.300924   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.302264   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.192508   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.192743   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.566048   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:46.583140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:46.583212   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:46.624593   65622 cri.go:89] found id: ""
	I0318 22:00:46.624634   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.624643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:46.624649   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:46.624700   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:46.664828   65622 cri.go:89] found id: ""
	I0318 22:00:46.664858   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.664868   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:46.664874   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:46.664944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:46.703632   65622 cri.go:89] found id: ""
	I0318 22:00:46.703658   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.703668   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:46.703675   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:46.703736   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:46.743379   65622 cri.go:89] found id: ""
	I0318 22:00:46.743409   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.743420   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:46.743427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:46.743487   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:46.784145   65622 cri.go:89] found id: ""
	I0318 22:00:46.784169   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.784178   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:46.784184   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:46.784233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:46.826469   65622 cri.go:89] found id: ""
	I0318 22:00:46.826491   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.826498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:46.826504   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:46.826559   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:46.868061   65622 cri.go:89] found id: ""
	I0318 22:00:46.868089   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.868102   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:46.868110   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:46.868167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:46.910584   65622 cri.go:89] found id: ""
	I0318 22:00:46.910612   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.910622   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:46.910630   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:46.910642   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:46.954131   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:46.954157   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:47.008706   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:47.008737   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:47.024447   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:47.024474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:47.113208   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:47.113228   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:47.113242   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:49.699416   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:49.714870   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:49.714943   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:49.754386   65622 cri.go:89] found id: ""
	I0318 22:00:49.754415   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.754424   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:49.754430   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:49.754485   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:49.800223   65622 cri.go:89] found id: ""
	I0318 22:00:49.800248   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.800258   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:49.800268   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:49.800331   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:49.846747   65622 cri.go:89] found id: ""
	I0318 22:00:49.846775   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.846785   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:49.846793   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:49.846842   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:49.885554   65622 cri.go:89] found id: ""
	I0318 22:00:49.885581   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.885592   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:49.885600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:49.885652   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:49.925116   65622 cri.go:89] found id: ""
	I0318 22:00:49.925136   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.925144   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:49.925149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:49.925193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:49.968467   65622 cri.go:89] found id: ""
	I0318 22:00:49.968491   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.968498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:49.968503   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:49.968575   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:45.321277   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:47.821205   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.822803   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:48.799135   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.801798   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.692554   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.193102   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:51.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.016222   65622 cri.go:89] found id: ""
	I0318 22:00:50.016253   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.016261   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:50.016267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:50.016320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:50.057053   65622 cri.go:89] found id: ""
	I0318 22:00:50.057074   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.057082   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:50.057090   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:50.057101   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:50.137602   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:50.137631   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:50.213200   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:50.213227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:50.293533   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:50.293568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:50.312993   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:50.313019   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:50.399235   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:52.900027   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:52.914846   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:52.914918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:52.951864   65622 cri.go:89] found id: ""
	I0318 22:00:52.951887   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.951895   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:52.951900   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:52.951959   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:52.992339   65622 cri.go:89] found id: ""
	I0318 22:00:52.992374   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.992386   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:52.992393   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:52.992448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:53.030499   65622 cri.go:89] found id: ""
	I0318 22:00:53.030527   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.030536   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:53.030543   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:53.030610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:53.069607   65622 cri.go:89] found id: ""
	I0318 22:00:53.069635   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.069645   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:53.069652   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:53.069706   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:53.110235   65622 cri.go:89] found id: ""
	I0318 22:00:53.110256   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.110263   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:53.110269   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:53.110320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:53.152066   65622 cri.go:89] found id: ""
	I0318 22:00:53.152092   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.152100   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:53.152106   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:53.152166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:53.195360   65622 cri.go:89] found id: ""
	I0318 22:00:53.195386   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.195395   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:53.195402   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:53.195448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:53.235134   65622 cri.go:89] found id: ""
	I0318 22:00:53.235159   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.235166   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:53.235174   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:53.235186   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:53.286442   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:53.286473   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:53.342152   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:53.342183   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:53.358414   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:53.358438   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:53.430515   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:53.430534   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:53.430545   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:52.320478   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:54.321815   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.301031   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:55.799954   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.693639   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.193657   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.016088   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:56.034274   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:56.034350   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:56.095539   65622 cri.go:89] found id: ""
	I0318 22:00:56.095565   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.095581   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:56.095588   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:56.095645   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:56.149796   65622 cri.go:89] found id: ""
	I0318 22:00:56.149824   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.149834   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:56.149845   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:56.149907   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:56.205720   65622 cri.go:89] found id: ""
	I0318 22:00:56.205745   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.205760   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:56.205768   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:56.205828   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:56.250790   65622 cri.go:89] found id: ""
	I0318 22:00:56.250834   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.250862   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:56.250876   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:56.250944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:56.290516   65622 cri.go:89] found id: ""
	I0318 22:00:56.290538   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.290545   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:56.290552   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:56.290609   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:56.335528   65622 cri.go:89] found id: ""
	I0318 22:00:56.335557   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.335570   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:56.335577   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:56.335638   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:56.380336   65622 cri.go:89] found id: ""
	I0318 22:00:56.380365   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.380376   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:56.380383   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:56.380448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:56.426326   65622 cri.go:89] found id: ""
	I0318 22:00:56.426351   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.426359   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:56.426368   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:56.426385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:56.479966   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:56.480002   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.495557   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:56.495588   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:56.573474   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:56.573495   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:56.573506   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:56.657795   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:56.657826   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.206212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:59.221879   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:59.221936   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:59.265944   65622 cri.go:89] found id: ""
	I0318 22:00:59.265976   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.265986   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:59.265994   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:59.266052   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:59.305105   65622 cri.go:89] found id: ""
	I0318 22:00:59.305125   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.305132   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:59.305137   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:59.305182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:59.343573   65622 cri.go:89] found id: ""
	I0318 22:00:59.343600   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.343610   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:59.343618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:59.343674   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:59.385560   65622 cri.go:89] found id: ""
	I0318 22:00:59.385580   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.385587   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:59.385592   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:59.385639   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:59.422955   65622 cri.go:89] found id: ""
	I0318 22:00:59.422983   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.422994   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:59.423001   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:59.423062   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:59.460526   65622 cri.go:89] found id: ""
	I0318 22:00:59.460550   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.460561   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:59.460569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:59.460627   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:59.502703   65622 cri.go:89] found id: ""
	I0318 22:00:59.502732   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.502739   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:59.502753   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:59.502803   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:59.539097   65622 cri.go:89] found id: ""
	I0318 22:00:59.539120   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.539128   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:59.539136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:59.539147   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:59.613607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:59.613628   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:59.613643   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:59.697432   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:59.697460   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.744643   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:59.744671   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:59.800670   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:59.800704   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.820977   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.822348   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:57.804405   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.299016   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.692166   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.692526   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.318430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:02.334082   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:02.334158   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:02.383122   65622 cri.go:89] found id: ""
	I0318 22:01:02.383151   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.383161   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:02.383169   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:02.383229   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:02.426847   65622 cri.go:89] found id: ""
	I0318 22:01:02.426874   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.426884   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:02.426891   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:02.426955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:02.466377   65622 cri.go:89] found id: ""
	I0318 22:01:02.466403   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.466429   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:02.466437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:02.466501   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:02.506916   65622 cri.go:89] found id: ""
	I0318 22:01:02.506943   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.506953   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:02.506961   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:02.507021   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:02.549401   65622 cri.go:89] found id: ""
	I0318 22:01:02.549431   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.549439   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:02.549445   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:02.549494   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:02.589498   65622 cri.go:89] found id: ""
	I0318 22:01:02.589524   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.589535   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:02.589542   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:02.589603   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:02.626325   65622 cri.go:89] found id: ""
	I0318 22:01:02.626358   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.626369   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:02.626376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:02.626440   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:02.664922   65622 cri.go:89] found id: ""
	I0318 22:01:02.664949   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.664958   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:02.664969   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:02.664986   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:02.722853   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:02.722883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:02.740280   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:02.740305   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:02.819215   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:02.819232   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:02.819244   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:02.902355   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:02.902395   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:01.319955   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:03.324127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.299297   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:04.299721   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.694116   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.193971   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.452180   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:05.465921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:05.465981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:05.507224   65622 cri.go:89] found id: ""
	I0318 22:01:05.507245   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.507255   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:05.507262   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:05.507329   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:05.544705   65622 cri.go:89] found id: ""
	I0318 22:01:05.544737   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.544748   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:05.544754   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:05.544814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:05.583552   65622 cri.go:89] found id: ""
	I0318 22:01:05.583580   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.583592   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:05.583600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:05.583668   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:05.620969   65622 cri.go:89] found id: ""
	I0318 22:01:05.620995   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.621002   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:05.621009   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:05.621054   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:05.662789   65622 cri.go:89] found id: ""
	I0318 22:01:05.662816   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.662827   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:05.662835   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:05.662900   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:05.701457   65622 cri.go:89] found id: ""
	I0318 22:01:05.701496   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.701506   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:05.701513   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:05.701566   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:05.742050   65622 cri.go:89] found id: ""
	I0318 22:01:05.742078   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.742088   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:05.742095   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:05.742162   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:05.782620   65622 cri.go:89] found id: ""
	I0318 22:01:05.782645   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.782653   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:05.782661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:05.782672   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:05.875779   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:05.875815   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.927687   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:05.927711   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:05.979235   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:05.979264   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:05.997508   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:05.997536   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:06.073619   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:08.574277   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:08.588248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:08.588312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:08.626950   65622 cri.go:89] found id: ""
	I0318 22:01:08.626976   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.626987   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:08.626993   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:08.627050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:08.670404   65622 cri.go:89] found id: ""
	I0318 22:01:08.670429   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.670436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:08.670442   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:08.670505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:08.706036   65622 cri.go:89] found id: ""
	I0318 22:01:08.706063   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.706072   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:08.706079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:08.706134   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:08.743251   65622 cri.go:89] found id: ""
	I0318 22:01:08.743279   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.743290   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:08.743298   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:08.743361   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:08.782303   65622 cri.go:89] found id: ""
	I0318 22:01:08.782329   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.782340   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:08.782347   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:08.782413   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:08.827060   65622 cri.go:89] found id: ""
	I0318 22:01:08.827086   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.827095   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:08.827104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:08.827157   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:08.867098   65622 cri.go:89] found id: ""
	I0318 22:01:08.867126   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.867137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:08.867145   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:08.867192   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:08.906283   65622 cri.go:89] found id: ""
	I0318 22:01:08.906314   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.906323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:08.906334   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:08.906349   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:08.959145   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:08.959171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:08.976307   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:08.976336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:09.049255   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:09.049285   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:09.049300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:09.139458   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:09.139493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.821257   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.320779   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:06.799599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.800534   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.301906   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:07.195710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:09.691770   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.687215   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:11.701855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:11.701926   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:11.740185   65622 cri.go:89] found id: ""
	I0318 22:01:11.740213   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.740224   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:11.740231   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:11.740293   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:11.782083   65622 cri.go:89] found id: ""
	I0318 22:01:11.782110   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.782119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:11.782126   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:11.782187   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:11.830887   65622 cri.go:89] found id: ""
	I0318 22:01:11.830910   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.830920   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:11.830928   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:11.830981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:11.868585   65622 cri.go:89] found id: ""
	I0318 22:01:11.868607   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.868613   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:11.868618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:11.868673   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:11.912298   65622 cri.go:89] found id: ""
	I0318 22:01:11.912324   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.912336   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:11.912343   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:11.912396   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:11.957511   65622 cri.go:89] found id: ""
	I0318 22:01:11.957536   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.957546   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:11.957553   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:11.957610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:11.998894   65622 cri.go:89] found id: ""
	I0318 22:01:11.998916   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.998927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:11.998934   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:11.998984   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:12.039419   65622 cri.go:89] found id: ""
	I0318 22:01:12.039446   65622 logs.go:276] 0 containers: []
	W0318 22:01:12.039458   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:12.039468   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:12.039484   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:12.094721   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:12.094750   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:12.110328   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:12.110351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:12.183351   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:12.183371   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:12.183385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:12.260772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:12.260812   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:14.806518   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:14.821701   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:14.821760   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:14.864280   65622 cri.go:89] found id: ""
	I0318 22:01:14.864307   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.864316   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:14.864322   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:14.864380   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:14.913041   65622 cri.go:89] found id: ""
	I0318 22:01:14.913071   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.913083   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:14.913091   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:14.913155   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:14.951563   65622 cri.go:89] found id: ""
	I0318 22:01:14.951586   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.951594   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:14.951600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:14.951651   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:10.321379   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:12.321708   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.324578   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:13.303344   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:15.799107   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.692795   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.192711   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:16.192974   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.993070   65622 cri.go:89] found id: ""
	I0318 22:01:14.993103   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.993114   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:14.993122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:14.993182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:15.033552   65622 cri.go:89] found id: ""
	I0318 22:01:15.033580   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.033591   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:15.033600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:15.033660   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:15.075982   65622 cri.go:89] found id: ""
	I0318 22:01:15.076009   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.076020   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:15.076031   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:15.076090   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:15.118757   65622 cri.go:89] found id: ""
	I0318 22:01:15.118784   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.118795   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:15.118801   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:15.118844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:15.160333   65622 cri.go:89] found id: ""
	I0318 22:01:15.160355   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.160366   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:15.160374   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:15.160387   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:15.239607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:15.239635   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:15.239653   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:15.324254   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:15.324285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:15.370722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:15.370754   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:15.423268   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:15.423297   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:17.940107   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:17.954692   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:17.954749   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:18.001810   65622 cri.go:89] found id: ""
	I0318 22:01:18.001831   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.001838   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:18.001844   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:18.001903   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:18.042871   65622 cri.go:89] found id: ""
	I0318 22:01:18.042897   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.042909   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:18.042916   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:18.042975   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:18.083933   65622 cri.go:89] found id: ""
	I0318 22:01:18.083956   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.083964   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:18.083969   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:18.084019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:18.125590   65622 cri.go:89] found id: ""
	I0318 22:01:18.125617   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.125628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:18.125636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:18.125697   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:18.166696   65622 cri.go:89] found id: ""
	I0318 22:01:18.166727   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.166737   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:18.166745   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:18.166806   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:18.211273   65622 cri.go:89] found id: ""
	I0318 22:01:18.211297   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.211308   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:18.211315   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:18.211382   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:18.251821   65622 cri.go:89] found id: ""
	I0318 22:01:18.251844   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.251851   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:18.251860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:18.251918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:18.290507   65622 cri.go:89] found id: ""
	I0318 22:01:18.290531   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.290541   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:18.290552   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:18.290568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:18.349013   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:18.349041   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:18.366082   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:18.366113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:18.441742   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:18.441766   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:18.441780   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:18.535299   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:18.535335   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:16.820809   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.820856   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:17.800874   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.301479   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.691838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.692582   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:21.077652   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:21.092980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:21.093039   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:21.132742   65622 cri.go:89] found id: ""
	I0318 22:01:21.132762   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.132770   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:21.132776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:21.132833   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:21.170814   65622 cri.go:89] found id: ""
	I0318 22:01:21.170836   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.170844   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:21.170849   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:21.170911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:21.212812   65622 cri.go:89] found id: ""
	I0318 22:01:21.212845   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.212853   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:21.212860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:21.212924   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:21.254010   65622 cri.go:89] found id: ""
	I0318 22:01:21.254036   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.254044   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:21.254052   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:21.254095   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:21.292032   65622 cri.go:89] found id: ""
	I0318 22:01:21.292061   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.292073   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:21.292083   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:21.292152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:21.336946   65622 cri.go:89] found id: ""
	I0318 22:01:21.336975   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.336985   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:21.336992   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:21.337043   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:21.380295   65622 cri.go:89] found id: ""
	I0318 22:01:21.380319   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.380328   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:21.380336   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:21.380399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:21.417674   65622 cri.go:89] found id: ""
	I0318 22:01:21.417701   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.417708   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:21.417717   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:21.417728   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:21.470782   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:21.470808   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:21.486015   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:21.486036   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:21.560654   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:21.560682   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:21.560699   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:21.644108   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:21.644146   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:24.190787   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:24.205695   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:24.205761   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:24.262577   65622 cri.go:89] found id: ""
	I0318 22:01:24.262602   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.262610   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:24.262615   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:24.262680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:24.304807   65622 cri.go:89] found id: ""
	I0318 22:01:24.304835   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.304845   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:24.304853   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:24.304933   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:24.345595   65622 cri.go:89] found id: ""
	I0318 22:01:24.345670   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.345688   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:24.345696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:24.345762   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:24.388471   65622 cri.go:89] found id: ""
	I0318 22:01:24.388498   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.388508   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:24.388515   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:24.388573   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:24.429610   65622 cri.go:89] found id: ""
	I0318 22:01:24.429641   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.429653   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:24.429663   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:24.429728   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:24.469661   65622 cri.go:89] found id: ""
	I0318 22:01:24.469683   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.469690   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:24.469696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:24.469740   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:24.508086   65622 cri.go:89] found id: ""
	I0318 22:01:24.508115   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.508126   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:24.508133   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:24.508195   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:24.548963   65622 cri.go:89] found id: ""
	I0318 22:01:24.548988   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.548998   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:24.549009   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:24.549028   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:24.603983   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:24.604012   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:24.620185   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:24.620207   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:24.699677   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:24.699699   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:24.699713   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:24.778830   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:24.778884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:20.821237   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.320180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:22.302559   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:24.800442   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.193491   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:25.692671   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.334749   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:27.349132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:27.349188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:27.394163   65622 cri.go:89] found id: ""
	I0318 22:01:27.394190   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.394197   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:27.394203   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:27.394259   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:27.435176   65622 cri.go:89] found id: ""
	I0318 22:01:27.435198   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.435207   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:27.435215   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:27.435273   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:27.475388   65622 cri.go:89] found id: ""
	I0318 22:01:27.475414   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.475422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:27.475427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:27.475474   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:27.516225   65622 cri.go:89] found id: ""
	I0318 22:01:27.516247   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.516255   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:27.516265   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:27.516321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:27.554423   65622 cri.go:89] found id: ""
	I0318 22:01:27.554451   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.554459   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:27.554465   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:27.554518   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:27.592315   65622 cri.go:89] found id: ""
	I0318 22:01:27.592342   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.592352   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:27.592360   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:27.592418   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:27.634820   65622 cri.go:89] found id: ""
	I0318 22:01:27.634842   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.634849   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:27.634855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:27.634912   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:27.673677   65622 cri.go:89] found id: ""
	I0318 22:01:27.673703   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.673713   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:27.673724   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:27.673738   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:27.728342   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:27.728370   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:27.745465   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:27.745493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:27.817800   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:27.817822   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:27.817836   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:27.905115   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:27.905152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:25.322575   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.323097   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.821127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.302001   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.799369   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.693253   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.192347   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.450454   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:30.464916   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:30.464969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:30.504399   65622 cri.go:89] found id: ""
	I0318 22:01:30.504432   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.504443   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:30.504452   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:30.504505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:30.543216   65622 cri.go:89] found id: ""
	I0318 22:01:30.543240   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.543248   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:30.543254   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:30.543310   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:30.581415   65622 cri.go:89] found id: ""
	I0318 22:01:30.581440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.581451   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:30.581459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:30.581515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:30.620419   65622 cri.go:89] found id: ""
	I0318 22:01:30.620440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.620447   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:30.620453   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:30.620495   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:30.671859   65622 cri.go:89] found id: ""
	I0318 22:01:30.671886   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.671893   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:30.671899   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:30.671955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:30.732705   65622 cri.go:89] found id: ""
	I0318 22:01:30.732732   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.732742   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:30.732750   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:30.732811   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:30.793811   65622 cri.go:89] found id: ""
	I0318 22:01:30.793839   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.793850   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:30.793856   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:30.793915   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:30.851516   65622 cri.go:89] found id: ""
	I0318 22:01:30.851539   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.851546   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:30.851555   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:30.851566   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:30.907463   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:30.907496   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:30.924254   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:30.924286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:31.002155   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:31.002177   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:31.002193   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:31.085486   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:31.085515   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:33.627379   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:33.641314   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:33.641378   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:33.683093   65622 cri.go:89] found id: ""
	I0318 22:01:33.683119   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.683129   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:33.683136   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:33.683193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:33.724006   65622 cri.go:89] found id: ""
	I0318 22:01:33.724034   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.724042   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:33.724048   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:33.724091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:33.761196   65622 cri.go:89] found id: ""
	I0318 22:01:33.761224   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.761240   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:33.761248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:33.761306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:33.800636   65622 cri.go:89] found id: ""
	I0318 22:01:33.800661   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.800670   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:33.800676   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:33.800733   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:33.839423   65622 cri.go:89] found id: ""
	I0318 22:01:33.839450   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.839458   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:33.839464   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:33.839508   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:33.883076   65622 cri.go:89] found id: ""
	I0318 22:01:33.883102   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.883112   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:33.883118   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:33.883174   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:33.921886   65622 cri.go:89] found id: ""
	I0318 22:01:33.921909   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.921920   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:33.921926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:33.921981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:33.964632   65622 cri.go:89] found id: ""
	I0318 22:01:33.964659   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.964670   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:33.964680   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:33.964700   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:34.043708   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:34.043731   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:34.043743   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:34.129150   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:34.129178   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:34.176067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:34.176089   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:34.231399   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:34.231433   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:32.324221   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.821547   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.301599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.798017   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.692835   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.693519   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.747929   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:36.761803   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:36.761859   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:36.806407   65622 cri.go:89] found id: ""
	I0318 22:01:36.806434   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.806441   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:36.806447   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:36.806498   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:36.849046   65622 cri.go:89] found id: ""
	I0318 22:01:36.849073   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.849084   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:36.849092   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:36.849152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:36.889880   65622 cri.go:89] found id: ""
	I0318 22:01:36.889910   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.889922   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:36.889929   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:36.889995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:36.936012   65622 cri.go:89] found id: ""
	I0318 22:01:36.936033   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.936041   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:36.936046   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:36.936094   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:36.977538   65622 cri.go:89] found id: ""
	I0318 22:01:36.977568   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.977578   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:36.977587   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:36.977647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:37.014843   65622 cri.go:89] found id: ""
	I0318 22:01:37.014870   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.014881   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:37.014888   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:37.014956   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:37.055058   65622 cri.go:89] found id: ""
	I0318 22:01:37.055086   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.055097   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:37.055104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:37.055167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:37.100605   65622 cri.go:89] found id: ""
	I0318 22:01:37.100633   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.100642   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:37.100652   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:37.100666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:37.181840   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:37.181874   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:37.232689   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:37.232721   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:37.287264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:37.287294   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:37.305614   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:37.305638   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:37.389196   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:39.889461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:39.904409   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:39.904472   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:39.944610   65622 cri.go:89] found id: ""
	I0318 22:01:39.944633   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.944641   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:39.944647   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:39.944701   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:37.323580   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.325038   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.798108   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:38.799072   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:40.799797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.694495   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.192489   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:41.193100   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.984337   65622 cri.go:89] found id: ""
	I0318 22:01:39.984360   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.984367   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:39.984373   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:39.984427   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:40.026238   65622 cri.go:89] found id: ""
	I0318 22:01:40.026264   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.026276   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:40.026282   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:40.026338   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:40.075591   65622 cri.go:89] found id: ""
	I0318 22:01:40.075619   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.075628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:40.075636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:40.075686   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:40.126829   65622 cri.go:89] found id: ""
	I0318 22:01:40.126859   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.126871   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:40.126880   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:40.126941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:40.167695   65622 cri.go:89] found id: ""
	I0318 22:01:40.167724   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.167735   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:40.167744   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:40.167802   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:40.205545   65622 cri.go:89] found id: ""
	I0318 22:01:40.205570   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.205582   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:40.205589   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:40.205636   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:40.245521   65622 cri.go:89] found id: ""
	I0318 22:01:40.245547   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.245556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:40.245567   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:40.245583   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:40.306315   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:40.306348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:40.324996   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:40.325021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:40.406484   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:40.406513   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:40.406526   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:40.492294   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:40.492323   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:43.034812   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:43.049661   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:43.049727   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:43.089419   65622 cri.go:89] found id: ""
	I0318 22:01:43.089444   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.089453   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:43.089461   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:43.089515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:43.130350   65622 cri.go:89] found id: ""
	I0318 22:01:43.130384   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.130394   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:43.130401   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:43.130462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:43.171480   65622 cri.go:89] found id: ""
	I0318 22:01:43.171506   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.171515   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:43.171522   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:43.171567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:43.210215   65622 cri.go:89] found id: ""
	I0318 22:01:43.210240   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.210249   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:43.210258   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:43.210312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:43.247024   65622 cri.go:89] found id: ""
	I0318 22:01:43.247049   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.247056   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:43.247063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:43.247113   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:43.283614   65622 cri.go:89] found id: ""
	I0318 22:01:43.283640   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.283651   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:43.283659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:43.283716   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:43.327442   65622 cri.go:89] found id: ""
	I0318 22:01:43.327468   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.327478   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:43.327486   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:43.327544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:43.365732   65622 cri.go:89] found id: ""
	I0318 22:01:43.365760   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.365769   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:43.365780   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:43.365793   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:43.425359   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:43.425396   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:43.442136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:43.442161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:43.519737   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:43.519762   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:43.519777   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:43.602933   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:43.602972   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:41.821043   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:44.322040   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:42.802267   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.301098   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:43.692766   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.693595   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:46.146009   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:46.161266   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:46.161333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:46.203056   65622 cri.go:89] found id: ""
	I0318 22:01:46.203082   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.203094   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:46.203101   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:46.203159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:46.245954   65622 cri.go:89] found id: ""
	I0318 22:01:46.245981   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.245991   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:46.245998   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:46.246069   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:46.282395   65622 cri.go:89] found id: ""
	I0318 22:01:46.282420   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.282431   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:46.282438   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:46.282497   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:46.322036   65622 cri.go:89] found id: ""
	I0318 22:01:46.322061   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.322072   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:46.322079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:46.322136   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:46.360951   65622 cri.go:89] found id: ""
	I0318 22:01:46.360973   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.360981   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:46.360987   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:46.361049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:46.399334   65622 cri.go:89] found id: ""
	I0318 22:01:46.399364   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.399382   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:46.399391   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:46.399450   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:46.443891   65622 cri.go:89] found id: ""
	I0318 22:01:46.443922   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.443933   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:46.443940   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:46.443990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:46.483047   65622 cri.go:89] found id: ""
	I0318 22:01:46.483088   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.483099   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:46.483110   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:46.483124   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:46.542995   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:46.543026   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.559582   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:46.559605   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:46.637046   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:46.637065   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:46.637076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:46.719628   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:46.719657   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.263990   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:49.278403   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:49.278469   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:49.322980   65622 cri.go:89] found id: ""
	I0318 22:01:49.323003   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.323014   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:49.323021   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:49.323077   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:49.360100   65622 cri.go:89] found id: ""
	I0318 22:01:49.360120   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.360127   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:49.360132   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:49.360180   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:49.402044   65622 cri.go:89] found id: ""
	I0318 22:01:49.402084   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.402095   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:49.402103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:49.402164   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:49.442337   65622 cri.go:89] found id: ""
	I0318 22:01:49.442367   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.442391   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:49.442397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:49.442448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:49.479079   65622 cri.go:89] found id: ""
	I0318 22:01:49.479111   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.479124   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:49.479132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:49.479197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:49.526057   65622 cri.go:89] found id: ""
	I0318 22:01:49.526080   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.526090   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:49.526098   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:49.526159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:49.566720   65622 cri.go:89] found id: ""
	I0318 22:01:49.566747   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.566759   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:49.566767   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:49.566821   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:49.603120   65622 cri.go:89] found id: ""
	I0318 22:01:49.603142   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.603152   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:49.603163   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:49.603180   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:49.677879   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:49.677904   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:49.677921   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:49.762904   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:49.762933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.809332   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:49.809358   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:49.861568   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:49.861599   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.322167   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.322495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:47.800006   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.298196   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.193259   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.195154   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.377996   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:52.396078   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:52.396159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:52.435945   65622 cri.go:89] found id: ""
	I0318 22:01:52.435972   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.435980   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:52.435985   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:52.436034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:52.478723   65622 cri.go:89] found id: ""
	I0318 22:01:52.478754   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.478765   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:52.478772   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:52.478835   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:52.522240   65622 cri.go:89] found id: ""
	I0318 22:01:52.522267   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.522275   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:52.522281   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:52.522336   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:52.560168   65622 cri.go:89] found id: ""
	I0318 22:01:52.560195   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.560202   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:52.560208   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:52.560253   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:52.599730   65622 cri.go:89] found id: ""
	I0318 22:01:52.599752   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.599759   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:52.599765   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:52.599810   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:52.640357   65622 cri.go:89] found id: ""
	I0318 22:01:52.640386   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.640400   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:52.640407   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:52.640465   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:52.680925   65622 cri.go:89] found id: ""
	I0318 22:01:52.680954   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.680966   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:52.680972   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:52.681041   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:52.719537   65622 cri.go:89] found id: ""
	I0318 22:01:52.719561   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.719570   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:52.719580   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:52.719597   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:52.773264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:52.773292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:52.788278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:52.788302   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:52.866674   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:52.866700   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:52.866714   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:52.952228   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:52.952263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:50.821598   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:53.321546   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.302659   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:54.799292   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.692794   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.192968   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.499710   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:55.514986   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:55.515049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:55.561168   65622 cri.go:89] found id: ""
	I0318 22:01:55.561191   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.561198   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:55.561204   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:55.561252   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:55.606505   65622 cri.go:89] found id: ""
	I0318 22:01:55.606534   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.606545   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:55.606552   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:55.606613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:55.648625   65622 cri.go:89] found id: ""
	I0318 22:01:55.648655   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.648665   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:55.648672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:55.648731   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:55.690878   65622 cri.go:89] found id: ""
	I0318 22:01:55.690903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.690914   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:55.690923   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:55.690987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:55.729873   65622 cri.go:89] found id: ""
	I0318 22:01:55.729903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.729914   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:55.729921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:55.729982   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:55.767926   65622 cri.go:89] found id: ""
	I0318 22:01:55.767951   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.767959   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:55.767965   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:55.768025   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:55.809907   65622 cri.go:89] found id: ""
	I0318 22:01:55.809934   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.809942   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:55.809947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:55.810009   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:55.853992   65622 cri.go:89] found id: ""
	I0318 22:01:55.854023   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.854032   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:55.854041   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:55.854060   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:55.932160   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:55.932185   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:55.932200   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:56.019976   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:56.020010   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:56.063901   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:56.063935   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:56.119282   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:56.119314   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:58.636555   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:58.651774   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:58.651851   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:58.697005   65622 cri.go:89] found id: ""
	I0318 22:01:58.697037   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.697047   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:58.697055   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:58.697128   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:58.742190   65622 cri.go:89] found id: ""
	I0318 22:01:58.742218   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.742229   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:58.742236   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:58.742297   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:58.779335   65622 cri.go:89] found id: ""
	I0318 22:01:58.779359   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.779378   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:58.779385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:58.779445   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:58.818936   65622 cri.go:89] found id: ""
	I0318 22:01:58.818964   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.818972   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:58.818980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:58.819034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:58.856473   65622 cri.go:89] found id: ""
	I0318 22:01:58.856500   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.856511   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:58.856518   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:58.856579   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:58.897381   65622 cri.go:89] found id: ""
	I0318 22:01:58.897412   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.897423   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:58.897432   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:58.897503   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:58.938179   65622 cri.go:89] found id: ""
	I0318 22:01:58.938209   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.938221   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:58.938228   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:58.938295   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:58.981021   65622 cri.go:89] found id: ""
	I0318 22:01:58.981049   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.981059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:58.981067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:58.981081   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:59.054749   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:59.054779   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:59.070160   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:59.070188   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:59.150369   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:59.150385   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:59.150398   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:59.238341   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:59.238381   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:55.821471   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.822495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.299408   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.299964   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.193704   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.194959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.790139   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:01.807948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:01.808006   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:01.855198   65622 cri.go:89] found id: ""
	I0318 22:02:01.855224   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.855231   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:01.855238   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:01.855291   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:01.895292   65622 cri.go:89] found id: ""
	I0318 22:02:01.895313   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.895321   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:01.895326   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:01.895381   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:01.934102   65622 cri.go:89] found id: ""
	I0318 22:02:01.934127   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.934139   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:01.934146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:01.934196   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:01.975676   65622 cri.go:89] found id: ""
	I0318 22:02:01.975704   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.975715   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:01.975723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:01.975789   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:02.015656   65622 cri.go:89] found id: ""
	I0318 22:02:02.015691   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.015701   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:02.015710   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:02.015771   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:02.058634   65622 cri.go:89] found id: ""
	I0318 22:02:02.058658   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.058666   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:02.058672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:02.058719   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:02.096655   65622 cri.go:89] found id: ""
	I0318 22:02:02.096681   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.096692   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:02.096700   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:02.096767   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:02.137485   65622 cri.go:89] found id: ""
	I0318 22:02:02.137510   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.137519   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:02.137527   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:02.137543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:02.221269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:02.221304   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:02.265816   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:02.265846   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:02.321554   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:02.321592   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:02.338503   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:02.338530   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:02.431779   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:04.932229   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:04.948859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:04.948931   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:00.321126   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:02.321899   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.798818   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:03.800605   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:05.801459   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.693520   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.192449   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:06.192843   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.995353   65622 cri.go:89] found id: ""
	I0318 22:02:04.995379   65622 logs.go:276] 0 containers: []
	W0318 22:02:04.995386   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:04.995392   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:04.995438   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:05.034886   65622 cri.go:89] found id: ""
	I0318 22:02:05.034911   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.034922   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:05.034929   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:05.034995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:05.076635   65622 cri.go:89] found id: ""
	I0318 22:02:05.076663   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.076673   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:05.076681   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:05.076742   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:05.119481   65622 cri.go:89] found id: ""
	I0318 22:02:05.119506   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.119514   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:05.119520   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:05.119571   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:05.162331   65622 cri.go:89] found id: ""
	I0318 22:02:05.162354   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.162369   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:05.162376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:05.162428   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:05.206038   65622 cri.go:89] found id: ""
	I0318 22:02:05.206066   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.206076   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:05.206084   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:05.206142   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:05.251273   65622 cri.go:89] found id: ""
	I0318 22:02:05.251298   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.251309   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:05.251316   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:05.251375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:05.292855   65622 cri.go:89] found id: ""
	I0318 22:02:05.292882   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.292892   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:05.292917   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:05.292933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:05.310330   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:05.310354   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:05.384915   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:05.384938   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:05.384957   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:05.472147   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:05.472182   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:05.544328   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:05.544351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:08.101241   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:08.117397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:08.117515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:08.160011   65622 cri.go:89] found id: ""
	I0318 22:02:08.160035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.160043   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:08.160048   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:08.160100   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:08.202826   65622 cri.go:89] found id: ""
	I0318 22:02:08.202849   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.202860   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:08.202867   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:08.202935   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:08.241743   65622 cri.go:89] found id: ""
	I0318 22:02:08.241780   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.241792   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:08.241800   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:08.241864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:08.280725   65622 cri.go:89] found id: ""
	I0318 22:02:08.280758   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.280769   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:08.280777   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:08.280840   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:08.324015   65622 cri.go:89] found id: ""
	I0318 22:02:08.324035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.324041   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:08.324047   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:08.324104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:08.367332   65622 cri.go:89] found id: ""
	I0318 22:02:08.367356   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.367368   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:08.367375   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:08.367433   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:08.407042   65622 cri.go:89] found id: ""
	I0318 22:02:08.407066   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.407073   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:08.407079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:08.407126   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:08.443800   65622 cri.go:89] found id: ""
	I0318 22:02:08.443820   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.443827   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:08.443836   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:08.443850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:08.459139   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:08.459172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:08.534893   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:08.534918   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:08.534934   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:08.627283   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:08.627322   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:08.672928   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:08.672967   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:06.821775   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:09.322004   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.299572   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:10.799620   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.693106   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.192341   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.230296   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:11.248814   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:11.248891   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:11.297030   65622 cri.go:89] found id: ""
	I0318 22:02:11.297056   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.297065   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:11.297072   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:11.297133   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:11.348811   65622 cri.go:89] found id: ""
	I0318 22:02:11.348837   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.348847   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:11.348854   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:11.348939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:11.412137   65622 cri.go:89] found id: ""
	I0318 22:02:11.412161   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.412168   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:11.412174   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:11.412231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:11.452098   65622 cri.go:89] found id: ""
	I0318 22:02:11.452128   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.452139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:11.452147   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:11.452207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:11.492477   65622 cri.go:89] found id: ""
	I0318 22:02:11.492509   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.492519   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:11.492527   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:11.492588   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:11.532208   65622 cri.go:89] found id: ""
	I0318 22:02:11.532234   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.532244   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:11.532252   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:11.532306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:11.570515   65622 cri.go:89] found id: ""
	I0318 22:02:11.570545   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.570556   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:11.570563   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:11.570633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:11.613031   65622 cri.go:89] found id: ""
	I0318 22:02:11.613052   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.613069   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:11.613079   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:11.613098   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:11.672019   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:11.672048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:11.687528   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:11.687550   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:11.761149   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:11.761172   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:11.761187   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.847273   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:11.847311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:14.393016   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:14.409657   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:14.409732   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:14.451669   65622 cri.go:89] found id: ""
	I0318 22:02:14.451697   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.451711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:14.451717   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:14.451763   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:14.503383   65622 cri.go:89] found id: ""
	I0318 22:02:14.503408   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.503419   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:14.503427   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:14.503491   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:14.543027   65622 cri.go:89] found id: ""
	I0318 22:02:14.543048   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.543056   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:14.543061   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:14.543104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:14.583615   65622 cri.go:89] found id: ""
	I0318 22:02:14.583639   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.583649   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:14.583656   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:14.583713   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:14.621176   65622 cri.go:89] found id: ""
	I0318 22:02:14.621206   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.621217   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:14.621225   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:14.621283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:14.659419   65622 cri.go:89] found id: ""
	I0318 22:02:14.659440   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.659448   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:14.659454   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:14.659499   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:14.699307   65622 cri.go:89] found id: ""
	I0318 22:02:14.699337   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.699347   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:14.699354   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:14.699416   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:14.737379   65622 cri.go:89] found id: ""
	I0318 22:02:14.737406   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.737414   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:14.737421   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:14.737432   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:14.793912   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:14.793939   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:14.809577   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:14.809604   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:14.898740   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:14.898767   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:14.898782   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.821139   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.821610   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.299590   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.303956   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.692089   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.693750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:14.981009   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:14.981038   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.526944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:17.543437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:17.543488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:17.585722   65622 cri.go:89] found id: ""
	I0318 22:02:17.585747   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.585757   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:17.585765   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:17.585820   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:17.623603   65622 cri.go:89] found id: ""
	I0318 22:02:17.623632   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.623642   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:17.623650   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:17.623712   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:17.666086   65622 cri.go:89] found id: ""
	I0318 22:02:17.666113   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.666122   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:17.666130   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:17.666188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:17.714403   65622 cri.go:89] found id: ""
	I0318 22:02:17.714430   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.714440   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:17.714448   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:17.714527   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:17.753174   65622 cri.go:89] found id: ""
	I0318 22:02:17.753199   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.753206   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:17.753212   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:17.753270   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:17.794962   65622 cri.go:89] found id: ""
	I0318 22:02:17.794992   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.795002   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:17.795010   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:17.795068   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:17.835446   65622 cri.go:89] found id: ""
	I0318 22:02:17.835469   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.835477   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:17.835482   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:17.835529   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:17.872243   65622 cri.go:89] found id: ""
	I0318 22:02:17.872271   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.872279   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:17.872287   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:17.872299   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.915485   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:17.915520   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:17.969133   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:17.969161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:17.984278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:17.984300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:18.055851   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:18.055871   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:18.055884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:16.320827   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:18.321654   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.800563   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.300888   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.694101   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.191376   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.646312   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:20.660153   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:20.660220   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:20.704341   65622 cri.go:89] found id: ""
	I0318 22:02:20.704365   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.704376   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:20.704388   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:20.704443   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:20.747673   65622 cri.go:89] found id: ""
	I0318 22:02:20.747694   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.747702   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:20.747708   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:20.747753   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:20.787547   65622 cri.go:89] found id: ""
	I0318 22:02:20.787574   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.787585   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:20.787593   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:20.787694   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:20.830416   65622 cri.go:89] found id: ""
	I0318 22:02:20.830450   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.830461   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:20.830469   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:20.830531   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:20.871867   65622 cri.go:89] found id: ""
	I0318 22:02:20.871899   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.871912   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:20.871919   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:20.871980   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:20.915574   65622 cri.go:89] found id: ""
	I0318 22:02:20.915602   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.915614   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:20.915622   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:20.915680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:20.956277   65622 cri.go:89] found id: ""
	I0318 22:02:20.956313   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.956322   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:20.956329   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:20.956399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:20.997686   65622 cri.go:89] found id: ""
	I0318 22:02:20.997715   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.997723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:20.997732   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:20.997745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:21.015019   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:21.015048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:21.092090   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:21.092117   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:21.092133   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:21.169118   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:21.169149   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:21.215267   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:21.215298   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:23.769587   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:23.784063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:23.784119   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:23.825704   65622 cri.go:89] found id: ""
	I0318 22:02:23.825726   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.825733   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:23.825740   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:23.825795   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:23.871536   65622 cri.go:89] found id: ""
	I0318 22:02:23.871561   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.871579   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:23.871586   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:23.871647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:23.911388   65622 cri.go:89] found id: ""
	I0318 22:02:23.911415   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.911422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:23.911428   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:23.911478   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:23.956649   65622 cri.go:89] found id: ""
	I0318 22:02:23.956671   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.956679   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:23.956687   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:23.956755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:23.999368   65622 cri.go:89] found id: ""
	I0318 22:02:23.999395   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.999405   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:23.999413   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:23.999471   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:24.039075   65622 cri.go:89] found id: ""
	I0318 22:02:24.039105   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.039118   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:24.039124   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:24.039186   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:24.079473   65622 cri.go:89] found id: ""
	I0318 22:02:24.079502   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.079513   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:24.079521   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:24.079587   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:24.118019   65622 cri.go:89] found id: ""
	I0318 22:02:24.118048   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.118059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:24.118069   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:24.118085   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:24.174530   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:24.174562   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:24.191685   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:24.191724   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:24.282133   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:24.282158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:24.282172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:24.366181   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:24.366228   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:20.322586   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.820488   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.820555   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.798797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.799501   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.192760   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.193279   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.912982   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:26.927364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:26.927425   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:26.968236   65622 cri.go:89] found id: ""
	I0318 22:02:26.968259   65622 logs.go:276] 0 containers: []
	W0318 22:02:26.968267   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:26.968272   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:26.968339   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:27.008226   65622 cri.go:89] found id: ""
	I0318 22:02:27.008251   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.008261   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:27.008267   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:27.008321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:27.047742   65622 cri.go:89] found id: ""
	I0318 22:02:27.047767   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.047777   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:27.047784   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:27.047844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:27.090692   65622 cri.go:89] found id: ""
	I0318 22:02:27.090722   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.090734   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:27.090741   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:27.090797   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:27.126596   65622 cri.go:89] found id: ""
	I0318 22:02:27.126621   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.126629   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:27.126635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:27.126684   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:27.162492   65622 cri.go:89] found id: ""
	I0318 22:02:27.162521   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.162530   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:27.162535   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:27.162583   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:27.203480   65622 cri.go:89] found id: ""
	I0318 22:02:27.203504   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.203517   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:27.203524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:27.203598   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:27.247140   65622 cri.go:89] found id: ""
	I0318 22:02:27.247162   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.247172   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:27.247182   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:27.247198   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:27.328507   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:27.328529   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:27.328543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:27.409269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:27.409303   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:27.459615   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:27.459647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:27.512980   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:27.513014   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:26.821222   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.321682   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:27.302631   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.799175   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.693239   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.192207   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.193072   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:30.030021   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:30.045235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:30.045288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:30.092857   65622 cri.go:89] found id: ""
	I0318 22:02:30.092896   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.092919   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:30.092927   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:30.092977   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:30.133145   65622 cri.go:89] found id: ""
	I0318 22:02:30.133169   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.133176   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:30.133181   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:30.133244   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:30.179214   65622 cri.go:89] found id: ""
	I0318 22:02:30.179242   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.179252   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:30.179259   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:30.179323   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:30.221500   65622 cri.go:89] found id: ""
	I0318 22:02:30.221524   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.221533   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:30.221541   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:30.221585   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:30.262483   65622 cri.go:89] found id: ""
	I0318 22:02:30.262505   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.262516   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:30.262524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:30.262584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:30.308456   65622 cri.go:89] found id: ""
	I0318 22:02:30.308482   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.308493   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:30.308500   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:30.308544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:30.346818   65622 cri.go:89] found id: ""
	I0318 22:02:30.346845   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.346853   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:30.346859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:30.346914   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:30.387265   65622 cri.go:89] found id: ""
	I0318 22:02:30.387298   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.387307   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:30.387317   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:30.387336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:30.446382   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:30.446409   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:30.462305   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:30.462329   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:30.538560   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:30.538583   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:30.538598   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:30.622537   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:30.622571   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:33.172154   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:33.186477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:33.186540   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:33.223436   65622 cri.go:89] found id: ""
	I0318 22:02:33.223464   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.223474   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:33.223481   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:33.223537   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:33.264785   65622 cri.go:89] found id: ""
	I0318 22:02:33.264810   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.264821   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:33.264829   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:33.264881   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:33.308014   65622 cri.go:89] found id: ""
	I0318 22:02:33.308035   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.308045   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:33.308055   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:33.308109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:33.348188   65622 cri.go:89] found id: ""
	I0318 22:02:33.348215   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.348224   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:33.348231   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:33.348292   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:33.387905   65622 cri.go:89] found id: ""
	I0318 22:02:33.387935   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.387946   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:33.387954   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:33.388015   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:33.430915   65622 cri.go:89] found id: ""
	I0318 22:02:33.430944   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.430956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:33.430964   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:33.431019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:33.473103   65622 cri.go:89] found id: ""
	I0318 22:02:33.473128   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.473135   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:33.473140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:33.473197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:33.512960   65622 cri.go:89] found id: ""
	I0318 22:02:33.512992   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.513003   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:33.513015   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:33.513029   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:33.569517   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:33.569554   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:33.585235   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:33.585263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:33.659494   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:33.659519   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:33.659538   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:33.749134   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:33.749181   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:31.820868   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.822075   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.802719   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:34.301730   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.692959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.194871   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.306589   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:36.321602   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:36.321654   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:36.364047   65622 cri.go:89] found id: ""
	I0318 22:02:36.364068   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.364076   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:36.364083   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:36.364139   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:36.406084   65622 cri.go:89] found id: ""
	I0318 22:02:36.406111   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.406119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:36.406125   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:36.406176   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:36.450861   65622 cri.go:89] found id: ""
	I0318 22:02:36.450887   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.450895   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:36.450900   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:36.450946   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:36.493979   65622 cri.go:89] found id: ""
	I0318 22:02:36.494006   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.494014   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:36.494020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:36.494079   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:36.539123   65622 cri.go:89] found id: ""
	I0318 22:02:36.539150   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.539160   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:36.539167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:36.539233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:36.577460   65622 cri.go:89] found id: ""
	I0318 22:02:36.577485   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.577495   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:36.577502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:36.577546   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:36.615276   65622 cri.go:89] found id: ""
	I0318 22:02:36.615300   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.615308   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:36.615313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:36.615369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:36.652756   65622 cri.go:89] found id: ""
	I0318 22:02:36.652775   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.652782   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:36.652790   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:36.652802   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:36.706253   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:36.706282   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:36.722032   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:36.722055   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:36.797758   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:36.797783   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:36.797799   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:36.875589   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:36.875622   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:39.422267   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:39.436967   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:39.437040   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:39.479916   65622 cri.go:89] found id: ""
	I0318 22:02:39.479941   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.479950   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:39.479956   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:39.480012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:39.542890   65622 cri.go:89] found id: ""
	I0318 22:02:39.542920   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.542930   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:39.542937   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:39.542990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:39.588200   65622 cri.go:89] found id: ""
	I0318 22:02:39.588225   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.588233   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:39.588239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:39.588290   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:39.629014   65622 cri.go:89] found id: ""
	I0318 22:02:39.629036   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.629043   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:39.629049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:39.629105   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:39.675522   65622 cri.go:89] found id: ""
	I0318 22:02:39.675551   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.675561   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:39.675569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:39.675629   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:39.722842   65622 cri.go:89] found id: ""
	I0318 22:02:39.722873   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.722883   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:39.722890   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:39.722951   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:39.760410   65622 cri.go:89] found id: ""
	I0318 22:02:39.760440   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.760451   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:39.760458   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:39.760519   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:39.799982   65622 cri.go:89] found id: ""
	I0318 22:02:39.800007   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.800016   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:39.800027   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:39.800045   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:39.878784   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:39.878805   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:39.878821   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:39.965987   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:39.966021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:36.320427   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.321178   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.799943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:39.300691   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.699873   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.193658   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:40.015006   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:40.015040   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:40.068619   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:40.068648   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:42.586444   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:42.603310   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:42.603394   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:42.645260   65622 cri.go:89] found id: ""
	I0318 22:02:42.645288   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.645296   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:42.645301   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:42.645360   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:42.682004   65622 cri.go:89] found id: ""
	I0318 22:02:42.682029   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.682036   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:42.682042   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:42.682086   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:42.722886   65622 cri.go:89] found id: ""
	I0318 22:02:42.722922   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.722939   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:42.722947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:42.723008   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:42.759183   65622 cri.go:89] found id: ""
	I0318 22:02:42.759208   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.759218   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:42.759224   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:42.759283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:42.799292   65622 cri.go:89] found id: ""
	I0318 22:02:42.799316   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.799325   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:42.799337   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:42.799389   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:42.838821   65622 cri.go:89] found id: ""
	I0318 22:02:42.838848   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.838856   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:42.838861   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:42.838908   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:42.877889   65622 cri.go:89] found id: ""
	I0318 22:02:42.877917   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.877927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:42.877935   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:42.877991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:42.921283   65622 cri.go:89] found id: ""
	I0318 22:02:42.921310   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.921323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:42.921334   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:42.921348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:43.000405   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:43.000444   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:43.042091   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:43.042116   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:43.094030   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:43.094059   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:43.108612   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:43.108647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:43.194388   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:40.321388   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:42.822538   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.799159   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.800027   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.299156   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.693317   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.194419   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:45.694881   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:45.709833   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:45.709897   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:45.749770   65622 cri.go:89] found id: ""
	I0318 22:02:45.749797   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.749806   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:45.749812   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:45.749866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:45.794879   65622 cri.go:89] found id: ""
	I0318 22:02:45.794909   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.794920   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:45.794928   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:45.794988   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:45.841587   65622 cri.go:89] found id: ""
	I0318 22:02:45.841608   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.841618   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:45.841625   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:45.841725   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:45.884972   65622 cri.go:89] found id: ""
	I0318 22:02:45.885004   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.885015   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:45.885023   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:45.885084   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:45.936170   65622 cri.go:89] found id: ""
	I0318 22:02:45.936204   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.936215   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:45.936223   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:45.936286   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:45.993684   65622 cri.go:89] found id: ""
	I0318 22:02:45.993708   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.993715   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:45.993720   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:45.993766   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:46.048422   65622 cri.go:89] found id: ""
	I0318 22:02:46.048445   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.048453   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:46.048459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:46.048512   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:46.087173   65622 cri.go:89] found id: ""
	I0318 22:02:46.087197   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.087206   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:46.087214   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:46.087227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:46.168633   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:46.168661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:46.168675   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:46.250797   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:46.250827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:46.302862   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:46.302883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:46.358096   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:46.358125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:48.874275   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:48.890166   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:48.890231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:48.930832   65622 cri.go:89] found id: ""
	I0318 22:02:48.930861   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.930869   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:48.930875   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:48.930919   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:48.972784   65622 cri.go:89] found id: ""
	I0318 22:02:48.972809   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.972819   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:48.972826   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:48.972884   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:49.011201   65622 cri.go:89] found id: ""
	I0318 22:02:49.011222   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.011229   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:49.011235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:49.011277   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:49.050457   65622 cri.go:89] found id: ""
	I0318 22:02:49.050480   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.050496   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:49.050502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:49.050565   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:49.087585   65622 cri.go:89] found id: ""
	I0318 22:02:49.087611   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.087621   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:49.087629   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:49.087687   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:49.126761   65622 cri.go:89] found id: ""
	I0318 22:02:49.126794   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.126805   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:49.126813   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:49.126874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:49.166045   65622 cri.go:89] found id: ""
	I0318 22:02:49.166074   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.166085   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:49.166092   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:49.166147   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:49.205624   65622 cri.go:89] found id: ""
	I0318 22:02:49.205650   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.205660   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:49.205670   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:49.205684   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:49.257864   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:49.257891   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:49.272581   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:49.272606   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:49.349960   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:49.349981   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:49.349996   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:49.438873   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:49.438916   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:45.322637   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:47.820481   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.300259   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.798429   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.693209   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.693611   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:51.984840   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:52.002378   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:52.002436   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:52.040871   65622 cri.go:89] found id: ""
	I0318 22:02:52.040890   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.040898   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:52.040917   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:52.040973   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:52.076062   65622 cri.go:89] found id: ""
	I0318 22:02:52.076083   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.076090   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:52.076096   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:52.076167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:52.119597   65622 cri.go:89] found id: ""
	I0318 22:02:52.119621   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.119629   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:52.119635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:52.119690   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:52.157892   65622 cri.go:89] found id: ""
	I0318 22:02:52.157919   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.157929   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:52.157936   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:52.157995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:52.196738   65622 cri.go:89] found id: ""
	I0318 22:02:52.196760   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.196767   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:52.196772   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:52.196836   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:52.234012   65622 cri.go:89] found id: ""
	I0318 22:02:52.234036   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.234043   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:52.234049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:52.234104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:52.273720   65622 cri.go:89] found id: ""
	I0318 22:02:52.273750   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.273761   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:52.273769   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:52.273817   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:52.317495   65622 cri.go:89] found id: ""
	I0318 22:02:52.317525   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.317535   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:52.317545   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:52.317619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:52.371640   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:52.371666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:52.387141   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:52.387165   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:52.469009   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:52.469035   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:52.469047   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:52.550848   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:52.550880   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:50.322017   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.820364   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:54.820692   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.799942   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.301665   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.694058   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.194171   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.096980   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:55.111353   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:55.111406   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:55.155832   65622 cri.go:89] found id: ""
	I0318 22:02:55.155857   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.155875   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:55.155882   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:55.155942   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:55.195477   65622 cri.go:89] found id: ""
	I0318 22:02:55.195499   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.195509   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:55.195516   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:55.195567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:55.234536   65622 cri.go:89] found id: ""
	I0318 22:02:55.234564   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.234574   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:55.234582   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:55.234640   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:55.270955   65622 cri.go:89] found id: ""
	I0318 22:02:55.270977   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.270984   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:55.270989   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:55.271033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:55.308883   65622 cri.go:89] found id: ""
	I0318 22:02:55.308919   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.308930   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:55.308937   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:55.308985   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:55.355259   65622 cri.go:89] found id: ""
	I0318 22:02:55.355284   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.355294   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:55.355301   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:55.355364   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:55.392385   65622 cri.go:89] found id: ""
	I0318 22:02:55.392409   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.392417   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:55.392423   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:55.392466   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:55.433773   65622 cri.go:89] found id: ""
	I0318 22:02:55.433794   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.433802   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:55.433810   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:55.433827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:55.518513   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:55.518536   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:55.518553   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:55.602717   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:55.602751   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:55.652409   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:55.652436   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:55.707150   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:55.707175   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.223146   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:58.240213   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:58.240288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:58.280676   65622 cri.go:89] found id: ""
	I0318 22:02:58.280702   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.280711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:58.280719   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:58.280778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:58.324490   65622 cri.go:89] found id: ""
	I0318 22:02:58.324515   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.324524   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:58.324531   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:58.324592   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:58.370256   65622 cri.go:89] found id: ""
	I0318 22:02:58.370288   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.370298   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:58.370309   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:58.370369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:58.419969   65622 cri.go:89] found id: ""
	I0318 22:02:58.420002   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.420012   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:58.420020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:58.420082   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:58.464916   65622 cri.go:89] found id: ""
	I0318 22:02:58.464942   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.464950   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:58.464956   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:58.465016   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:58.511388   65622 cri.go:89] found id: ""
	I0318 22:02:58.511415   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.511425   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:58.511433   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:58.511500   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:58.555314   65622 cri.go:89] found id: ""
	I0318 22:02:58.555344   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.555356   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:58.555364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:58.555426   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:58.595200   65622 cri.go:89] found id: ""
	I0318 22:02:58.595229   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.595239   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:58.595249   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:58.595263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:58.642037   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:58.642069   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:58.700216   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:58.700247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.715851   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:58.715882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:58.792139   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:58.792158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:58.792171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:56.821255   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:58.828524   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.303516   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.791851   65211 pod_ready.go:81] duration metric: took 4m0.000068811s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	E0318 22:02:57.791889   65211 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:02:57.791913   65211 pod_ready.go:38] duration metric: took 4m13.55705031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:02:57.791938   65211 kubeadm.go:591] duration metric: took 4m20.862001116s to restartPrimaryControlPlane
	W0318 22:02:57.792000   65211 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:02:57.792027   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
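[editor's note] The interleaved pod_ready lines come from a poll loop that keeps re-reading the metrics-server pod and checking its Ready condition until the 4m0s budget above runs out. A rough sketch of that kind of check with client-go; the kubeconfig path and the pod name (copied from the log for illustration) are assumptions, and minikube's own helper in pod_ready.go differs in detail:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // what the `has status "Ready":"False"` lines are repeatedly testing.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "metrics-server-57f55c9bc5-vt7hj", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }

Once the deadline passes, the log above shows minikube giving up on restarting the primary control plane and falling back to "kubeadm reset".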
	I0318 22:02:57.692975   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:59.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.395212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:01.411364   65622 kubeadm.go:591] duration metric: took 4m3.302597324s to restartPrimaryControlPlane
	W0318 22:03:01.411442   65622 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:01.411474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:02.800222   65622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.388721926s)
	I0318 22:03:02.800302   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:02.817517   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:02.832036   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:02.844307   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:02.844324   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:02.844381   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:02.857804   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:02.857882   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:02.871307   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:02.883191   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:02.883252   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:02.896457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.908089   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:02.908147   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.920327   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:02.932098   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:02.932158   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
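[editor's note] The grep/rm pairs above are kubeadm.go's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is not found (or, as here, when the file does not exist at all). A local sketch of the same decision, without the ssh_runner/sudo indirection:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Equivalent to the logged "may not be in <file> - will remove"
                // branch, which is followed by "sudo rm -f <file>".
                fmt.Printf("%s: expected endpoint not found, removing\n", f)
                _ = os.Remove(f) // ignore the error, like rm -f
                continue
            }
            fmt.Printf("%s: already points at %s\n", f, endpoint)
        }
    }

With all four files gone after the reset, every grep exits with status 2 and the cleanup degenerates to four no-op removals before "kubeadm init" is started.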
	I0318 22:03:02.944129   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:03.034197   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:03:03.034333   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:03.204271   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:03.204501   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:03.204645   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:03.415789   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:03.417688   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:03.417801   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:03.417902   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:03.418026   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:03.418129   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:03.418242   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:03.418324   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:03.418420   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:03.418502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:03.418614   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:03.418744   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:03.418823   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:03.418916   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:03.644844   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:03.912013   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:04.097560   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:04.222469   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:04.239066   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:04.250168   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:04.250225   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:04.399277   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:04.401154   65622 out.go:204]   - Booting up control plane ...
	I0318 22:03:04.401283   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:04.406500   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:04.407544   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:04.410177   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:04.418949   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:01.321045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:03.322008   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.694585   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:04.195750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:05.322087   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:07.820940   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:09.822652   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:06.693803   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:08.693856   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:10.694375   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:12.321504   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:14.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:13.192173   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:15.193816   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:16.822327   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.322059   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:17.691761   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.691867   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.322674   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.823374   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.692710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.695045   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.192838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.322370   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:28.820807   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.165008   65211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.372946393s)
	I0318 22:03:30.165087   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:30.184259   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:30.198417   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:30.210595   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:30.210624   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:30.210675   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:30.222159   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:30.222210   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:30.234099   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:30.244546   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:30.244621   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:30.255192   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.265777   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:30.265833   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.276674   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:30.286349   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:30.286402   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:30.296530   65211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:30.522414   65211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:03:28.193120   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.194300   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:31.321986   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:33.823045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:32.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:34.693824   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.294937   65211 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:03:39.295015   65211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:39.295142   65211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:39.295296   65211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:39.295451   65211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:39.295550   65211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:39.297047   65211 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:39.297135   65211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:39.297250   65211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:39.297368   65211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:39.297461   65211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:39.297557   65211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:39.297640   65211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:39.297742   65211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:39.297831   65211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:39.297939   65211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:39.298032   65211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:39.298084   65211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:39.298206   65211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:39.298301   65211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:39.298376   65211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:39.298451   65211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:39.298518   65211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:39.298612   65211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:39.298693   65211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:39.299829   65211 out.go:204]   - Booting up control plane ...
	I0318 22:03:39.299959   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:39.300052   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:39.300150   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:39.300308   65211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:39.300444   65211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:39.300496   65211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:39.300713   65211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:39.300829   65211 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003359 seconds
	I0318 22:03:39.300997   65211 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:03:39.301155   65211 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:03:39.301228   65211 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:03:39.301451   65211 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-141758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:03:39.301526   65211 kubeadm.go:309] [bootstrap-token] Using token: p114v6.erax4pf5xkn6x2it
	I0318 22:03:39.302903   65211 out.go:204]   - Configuring RBAC rules ...
	I0318 22:03:39.303025   65211 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:03:39.303133   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:03:39.303301   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:03:39.303479   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:03:39.303574   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:03:39.303651   65211 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:03:39.303810   65211 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:03:39.303886   65211 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:03:39.303960   65211 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:03:39.303972   65211 kubeadm.go:309] 
	I0318 22:03:39.304041   65211 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:03:39.304050   65211 kubeadm.go:309] 
	I0318 22:03:39.304158   65211 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:03:39.304173   65211 kubeadm.go:309] 
	I0318 22:03:39.304208   65211 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:03:39.304292   65211 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:03:39.304368   65211 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:03:39.304377   65211 kubeadm.go:309] 
	I0318 22:03:39.304456   65211 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:03:39.304465   65211 kubeadm.go:309] 
	I0318 22:03:39.304547   65211 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:03:39.304570   65211 kubeadm.go:309] 
	I0318 22:03:39.304649   65211 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:03:39.304754   65211 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:03:39.304861   65211 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:03:39.304878   65211 kubeadm.go:309] 
	I0318 22:03:39.305028   65211 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:03:39.305134   65211 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:03:39.305144   65211 kubeadm.go:309] 
	I0318 22:03:39.305248   65211 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305390   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:03:39.305422   65211 kubeadm.go:309] 	--control-plane 
	I0318 22:03:39.305430   65211 kubeadm.go:309] 
	I0318 22:03:39.305545   65211 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:03:39.305556   65211 kubeadm.go:309] 
	I0318 22:03:39.305676   65211 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305843   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
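[editor's note] The --discovery-token-ca-cert-hash in the join command is, per the kubeadm documentation, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the CA file; the path is an assumption (minikube keeps its certs under /var/lib/minikube/certs on the guest):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Assumed location of the cluster CA on the control-plane node.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

The output should match the sha256:e0779c... value printed above for the embed-certs-141758 cluster, assuming the same CA is read.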
	I0318 22:03:39.305859   65211 cni.go:84] Creating CNI manager for ""
	I0318 22:03:39.305873   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:03:39.307416   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:03:36.323956   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:38.821180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.308819   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:03:39.375416   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
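[editor's note] The two Runs above create /etc/cni/net.d and copy a 457-byte bridge conflist into it; the exact bytes are not shown in the log. The sketch below writes a representative bridge + portmap conflist of the same general shape (the subnet, field values, and file contents are assumptions, not what minikube wrote here):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // A representative CNI conflist for the bridge plugin; minikube's actual
    // /etc/cni/net.d/1-k8s.conflist may differ in fields and values.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
        dir := "/etc/cni/net.d" // same directory as the "sudo mkdir -p" above
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        path := filepath.Join(dir, "1-k8s.conflist")
        if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("wrote", path)
    }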
	I0318 22:03:39.434235   65211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:03:39.434303   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.434360   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-141758 minikube.k8s.io/updated_at=2024_03_18T22_03_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=embed-certs-141758 minikube.k8s.io/primary=true
	I0318 22:03:39.677778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.708540   65211 ops.go:34] apiserver oom_adj: -16
	I0318 22:03:40.178803   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:40.678832   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:37.193451   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.193667   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:44.419883   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:03:44.420568   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:44.420749   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
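[editor's note] The kubelet-check failure above means kubeadm's probe of the kubelet's healthz endpoint on 127.0.0.1:10248 was refused, i.e. the kubelet never came up after "Starting the kubelet" at 22:03:04, and it is why this v1.20.0 init eventually times out. A small sketch of that probe, equivalent to the curl command kubeadm quotes:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        // Same endpoint quoted in the kubelet-check message.
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // On this node: "dial tcp 127.0.0.1:10248: connect: connection refused".
            fmt.Println("kubelet not healthy:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
    }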
	I0318 22:03:40.821359   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.323788   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:41.678334   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.177921   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.678115   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.178034   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.678655   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.177993   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.678581   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.177929   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.678124   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:46.178423   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.693587   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.693965   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.195060   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:49.421054   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:49.421381   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:45.821472   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:47.822362   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.678288   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.178394   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.678824   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.678144   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.178090   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.678295   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.178829   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.677856   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:51.177778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.197085   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:50.693056   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:51.192418   65699 pod_ready.go:81] duration metric: took 4m0.006727095s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:51.192452   65699 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0318 22:03:51.192462   65699 pod_ready.go:38] duration metric: took 4m5.551753918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:51.192480   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:51.192514   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:51.192574   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:51.248553   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:51.248575   65699 cri.go:89] found id: ""
	I0318 22:03:51.248583   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:51.248634   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.254205   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:51.254270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:51.303508   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.303534   65699 cri.go:89] found id: ""
	I0318 22:03:51.303543   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:51.303600   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.310160   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:51.310212   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:51.357409   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:51.357429   65699 cri.go:89] found id: ""
	I0318 22:03:51.357436   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:51.357480   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.362683   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:51.362744   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:51.413520   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.413550   65699 cri.go:89] found id: ""
	I0318 22:03:51.413560   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:51.413619   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.419412   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:51.419483   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:51.468338   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:51.468365   65699 cri.go:89] found id: ""
	I0318 22:03:51.468374   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:51.468432   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.474006   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:51.474070   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:51.520166   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:51.520188   65699 cri.go:89] found id: ""
	I0318 22:03:51.520195   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:51.520246   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.526087   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:51.526148   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:51.570735   65699 cri.go:89] found id: ""
	I0318 22:03:51.570761   65699 logs.go:276] 0 containers: []
	W0318 22:03:51.570772   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:51.570779   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:51.570832   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:51.678380   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.178543   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.677807   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.814739   65211 kubeadm.go:1107] duration metric: took 13.380493852s to wait for elevateKubeSystemPrivileges
	W0318 22:03:52.814773   65211 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:03:52.814782   65211 kubeadm.go:393] duration metric: took 5m15.94869953s to StartCluster
	I0318 22:03:52.814803   65211 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.814883   65211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:03:52.816928   65211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.817192   65211 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:03:52.818800   65211 out.go:177] * Verifying Kubernetes components...
	I0318 22:03:52.817486   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:03:52.817499   65211 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:03:52.820175   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:03:52.818838   65211 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-141758"
	I0318 22:03:52.820277   65211 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-141758"
	W0318 22:03:52.820288   65211 addons.go:243] addon storage-provisioner should already be in state true
	I0318 22:03:52.818844   65211 addons.go:69] Setting metrics-server=true in profile "embed-certs-141758"
	I0318 22:03:52.820369   65211 addons.go:234] Setting addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:52.818848   65211 addons.go:69] Setting default-storageclass=true in profile "embed-certs-141758"
	I0318 22:03:52.820429   65211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-141758"
	I0318 22:03:52.820317   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	W0318 22:03:52.820386   65211 addons.go:243] addon metrics-server should already be in state true
	I0318 22:03:52.820697   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.820821   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820846   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.820872   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820899   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.821079   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.821107   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.839829   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0318 22:03:52.839850   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I0318 22:03:52.839992   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840448   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840986   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841010   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841124   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841144   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841148   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841162   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841385   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841428   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841557   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841639   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.842001   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842043   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.842049   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842068   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.845295   65211 addons.go:234] Setting addon default-storageclass=true in "embed-certs-141758"
	W0318 22:03:52.845315   65211 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:03:52.845343   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.845692   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.845736   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.864111   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0318 22:03:52.864141   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0318 22:03:52.864614   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.864688   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.865181   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865199   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865318   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865334   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865556   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866107   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.866147   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.866343   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866630   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.868253   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.870076   65211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:03:52.871315   65211 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:52.871333   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:03:52.871352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.873922   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0318 22:03:52.874420   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.874924   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.874944   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.875080   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.875194   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.875254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.875346   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.875478   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.875718   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.875733   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.876060   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.876234   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.877582   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.879040   65211 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:03:50.320724   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.321791   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:54.821845   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.880124   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:03:52.880135   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:03:52.880152   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.882530   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.882957   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.882979   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.883230   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.883371   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.883507   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.883638   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.886181   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0318 22:03:52.886563   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.887043   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.887064   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.887416   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.887599   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.888998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.889490   65211 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:52.889504   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:03:52.889519   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.891985   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892380   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.892435   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.892776   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.892949   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.893066   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:53.047557   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:03:53.098470   65211 node_ready.go:35] waiting up to 6m0s for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111074   65211 node_ready.go:49] node "embed-certs-141758" has status "Ready":"True"
	I0318 22:03:53.111093   65211 node_ready.go:38] duration metric: took 12.593803ms for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111102   65211 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:53.127297   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:53.167460   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:03:53.167476   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:03:53.199789   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:53.221070   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:53.233431   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:03:53.233452   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:03:53.298339   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:53.298368   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:03:53.415046   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:55.057164   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.85734001s)
	I0318 22:03:55.057233   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057252   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.057590   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057601   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.057614   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057634   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057888   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057929   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.064097   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.064111   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.064376   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.064402   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.064418   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.138948   65211 pod_ready.go:92] pod "coredns-5dd5756b68-k675p" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.138968   65211 pod_ready.go:81] duration metric: took 2.011647544s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.138976   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150187   65211 pod_ready.go:92] pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.150204   65211 pod_ready.go:81] duration metric: took 11.222328ms for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150213   65211 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157054   65211 pod_ready.go:92] pod "etcd-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.157073   65211 pod_ready.go:81] duration metric: took 6.853876ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157086   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.167962   65211 pod_ready.go:92] pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.167986   65211 pod_ready.go:81] duration metric: took 10.892042ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.168000   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177187   65211 pod_ready.go:92] pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.177204   65211 pod_ready.go:81] duration metric: took 9.197593ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177213   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.515883   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.294780085s)
	I0318 22:03:55.515937   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.515948   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.515952   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.100869127s)
	I0318 22:03:55.515994   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516014   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516301   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516378   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516469   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516481   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516491   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516406   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516451   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516665   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516683   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516691   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516772   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516839   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516867   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519334   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.519340   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.519355   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519365   65211 addons.go:470] Verifying addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:55.520941   65211 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 22:03:55.522318   65211 addons.go:505] duration metric: took 2.704813533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 22:03:55.545590   65211 pod_ready.go:92] pod "kube-proxy-jltc7" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.545614   65211 pod_ready.go:81] duration metric: took 368.395697ms for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.545625   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932726   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.932750   65211 pod_ready.go:81] duration metric: took 387.117475ms for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932757   65211 pod_ready.go:38] duration metric: took 2.821645915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:55.932771   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:55.932815   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.969924   65211 api_server.go:72] duration metric: took 3.152691986s to wait for apiserver process to appear ...
	I0318 22:03:55.969955   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.969977   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 22:03:55.976004   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 22:03:55.977450   65211 api_server.go:141] control plane version: v1.28.4
	I0318 22:03:55.977489   65211 api_server.go:131] duration metric: took 7.525909ms to wait for apiserver health ...
	I0318 22:03:55.977499   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:56.138403   65211 system_pods.go:59] 9 kube-system pods found
	I0318 22:03:56.138429   65211 system_pods.go:61] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.138434   65211 system_pods.go:61] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.138438   65211 system_pods.go:61] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.138441   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.138444   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.138448   65211 system_pods.go:61] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.138453   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.138462   65211 system_pods.go:61] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.138519   65211 system_pods.go:61] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 22:03:56.138532   65211 system_pods.go:74] duration metric: took 161.01924ms to wait for pod list to return data ...
	I0318 22:03:56.138544   65211 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:03:56.331884   65211 default_sa.go:45] found service account: "default"
	I0318 22:03:56.331926   65211 default_sa.go:55] duration metric: took 193.36174ms for default service account to be created ...
	I0318 22:03:56.331937   65211 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:03:56.536411   65211 system_pods.go:86] 9 kube-system pods found
	I0318 22:03:56.536443   65211 system_pods.go:89] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.536452   65211 system_pods.go:89] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.536459   65211 system_pods.go:89] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.536466   65211 system_pods.go:89] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.536472   65211 system_pods.go:89] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.536479   65211 system_pods.go:89] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.536486   65211 system_pods.go:89] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.536497   65211 system_pods.go:89] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.536507   65211 system_pods.go:89] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Running
	I0318 22:03:56.536518   65211 system_pods.go:126] duration metric: took 204.57366ms to wait for k8s-apps to be running ...
	I0318 22:03:56.536531   65211 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:03:56.536579   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:56.557315   65211 system_svc.go:56] duration metric: took 20.775851ms WaitForService to wait for kubelet
	I0318 22:03:56.557344   65211 kubeadm.go:576] duration metric: took 3.740121987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:03:56.557375   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:03:51.614216   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:51.614235   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.614239   65699 cri.go:89] found id: ""
	I0318 22:03:51.614245   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:51.614297   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.619100   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.623808   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:51.623827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:51.780027   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:51.780067   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.842134   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:51.842167   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.889769   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:51.889797   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.942502   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:51.942543   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:52.467986   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:52.468043   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:52.518980   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:52.519023   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:52.536546   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:52.536586   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:52.591854   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:52.591894   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:52.640783   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:52.640818   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:52.687934   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:52.687967   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:52.749690   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:52.749726   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:52.807019   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:52.807064   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:55.392930   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.415406   65699 api_server.go:72] duration metric: took 4m15.533409678s to wait for apiserver process to appear ...
	I0318 22:03:55.415435   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.415472   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:55.415523   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:55.474200   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:55.474227   65699 cri.go:89] found id: ""
	I0318 22:03:55.474237   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:55.474295   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.479787   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:55.479907   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:55.532114   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:55.532136   65699 cri.go:89] found id: ""
	I0318 22:03:55.532145   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:55.532202   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.537215   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:55.537270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:55.588633   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:55.588657   65699 cri.go:89] found id: ""
	I0318 22:03:55.588666   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:55.588723   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.595711   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:55.595777   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:55.646684   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:55.646704   65699 cri.go:89] found id: ""
	I0318 22:03:55.646714   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:55.646770   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.651920   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:55.651982   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:55.694948   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:55.694975   65699 cri.go:89] found id: ""
	I0318 22:03:55.694984   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:55.695035   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.700275   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:55.700343   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:55.740536   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:55.740559   65699 cri.go:89] found id: ""
	I0318 22:03:55.740568   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:55.740618   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.745384   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:55.745446   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:55.784614   65699 cri.go:89] found id: ""
	I0318 22:03:55.784645   65699 logs.go:276] 0 containers: []
	W0318 22:03:55.784657   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:55.784664   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:55.784727   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:55.827306   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:55.827334   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:55.827341   65699 cri.go:89] found id: ""
	I0318 22:03:55.827349   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:55.827404   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.832314   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.838497   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:55.838520   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:55.857285   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:55.857319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:55.984597   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:55.984629   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:56.044283   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:56.044339   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:56.100329   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:56.100363   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:56.173231   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:56.173270   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:56.221280   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:56.221310   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:56.274110   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:56.274138   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:56.332863   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:56.332891   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:56.374289   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:56.374317   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:56.423793   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:56.423827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:56.478696   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:56.478734   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:56.518600   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:56.518627   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:56.731788   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:03:56.731810   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 22:03:56.731823   65211 node_conditions.go:105] duration metric: took 174.442649ms to run NodePressure ...
	I0318 22:03:56.731835   65211 start.go:240] waiting for startup goroutines ...
	I0318 22:03:56.731845   65211 start.go:245] waiting for cluster config update ...
	I0318 22:03:56.731857   65211 start.go:254] writing updated cluster config ...
	I0318 22:03:56.732109   65211 ssh_runner.go:195] Run: rm -f paused
	I0318 22:03:56.778660   65211 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:03:56.780431   65211 out.go:177] * Done! kubectl is now configured to use "embed-certs-141758" cluster and "default" namespace by default
	I0318 22:03:59.422001   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:59.422212   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:56.814631   65170 pod_ready.go:81] duration metric: took 4m0.000725499s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:56.814661   65170 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:03:56.814684   65170 pod_ready.go:38] duration metric: took 4m11.531709977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:56.814712   65170 kubeadm.go:591] duration metric: took 4m19.482098142s to restartPrimaryControlPlane
	W0318 22:03:56.814767   65170 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:56.814797   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:59.480665   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 22:03:59.485792   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 22:03:59.487343   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 22:03:59.487364   65699 api_server.go:131] duration metric: took 4.071921663s to wait for apiserver health ...
	I0318 22:03:59.487375   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:59.487406   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:59.487462   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:59.540845   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:59.540872   65699 cri.go:89] found id: ""
	I0318 22:03:59.540881   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:59.540958   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.547759   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:59.547824   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:59.593015   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:59.593042   65699 cri.go:89] found id: ""
	I0318 22:03:59.593051   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:59.593106   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.598169   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:59.598233   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:59.638484   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:59.638508   65699 cri.go:89] found id: ""
	I0318 22:03:59.638517   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:59.638575   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.643353   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:59.643416   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:59.687190   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:59.687208   65699 cri.go:89] found id: ""
	I0318 22:03:59.687216   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:59.687271   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.692481   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:59.692550   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:59.735798   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:59.735824   65699 cri.go:89] found id: ""
	I0318 22:03:59.735834   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:59.735893   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.742192   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:59.742263   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:59.782961   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:59.782989   65699 cri.go:89] found id: ""
	I0318 22:03:59.783000   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:59.783060   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.788247   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:59.788325   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:59.836955   65699 cri.go:89] found id: ""
	I0318 22:03:59.836983   65699 logs.go:276] 0 containers: []
	W0318 22:03:59.836992   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:59.836998   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:59.837052   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:59.879225   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:59.879250   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:59.879255   65699 cri.go:89] found id: ""
	I0318 22:03:59.879264   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:59.879323   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.884380   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.889289   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:59.889316   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:04:00.307344   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:04:00.307389   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:04:00.325472   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:04:00.325496   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:04:00.388254   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:04:00.388288   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:04:00.430203   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:04:00.430241   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:04:00.476834   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:04:00.476861   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:04:00.532672   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:04:00.532703   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:04:00.572174   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:04:00.572202   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:04:00.624250   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:04:00.624283   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:04:00.688520   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:04:00.688551   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:04:00.764279   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:04:00.764319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:04:00.903231   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:04:00.903262   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:04:00.974836   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:04:00.974869   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:04:03.547135   65699 system_pods.go:59] 8 kube-system pods found
	I0318 22:04:03.547166   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.547172   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.547180   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.547186   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.547193   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.547198   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.547208   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.547214   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.547224   65699 system_pods.go:74] duration metric: took 4.059842092s to wait for pod list to return data ...
	I0318 22:04:03.547233   65699 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:03.554656   65699 default_sa.go:45] found service account: "default"
	I0318 22:04:03.554682   65699 default_sa.go:55] duration metric: took 7.437557ms for default service account to be created ...
	I0318 22:04:03.554692   65699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:03.562342   65699 system_pods.go:86] 8 kube-system pods found
	I0318 22:04:03.562369   65699 system_pods.go:89] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.562374   65699 system_pods.go:89] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.562378   65699 system_pods.go:89] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.562383   65699 system_pods.go:89] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.562387   65699 system_pods.go:89] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.562391   65699 system_pods.go:89] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.562397   65699 system_pods.go:89] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.562402   65699 system_pods.go:89] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.562410   65699 system_pods.go:126] duration metric: took 7.712357ms to wait for k8s-apps to be running ...
	I0318 22:04:03.562424   65699 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:03.562470   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:03.579949   65699 system_svc.go:56] duration metric: took 17.517801ms WaitForService to wait for kubelet
	I0318 22:04:03.579977   65699 kubeadm.go:576] duration metric: took 4m23.697982351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:03.579993   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:03.585009   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:03.585037   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:03.585049   65699 node_conditions.go:105] duration metric: took 5.050614ms to run NodePressure ...
	I0318 22:04:03.585063   65699 start.go:240] waiting for startup goroutines ...
	I0318 22:04:03.585075   65699 start.go:245] waiting for cluster config update ...
	I0318 22:04:03.585089   65699 start.go:254] writing updated cluster config ...
	I0318 22:04:03.585426   65699 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:03.634969   65699 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 22:04:03.637561   65699 out.go:177] * Done! kubectl is now configured to use "no-preload-963041" cluster and "default" namespace by default
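Note: the no-preload profile finishes startup with metrics-server still Pending (see the pod listings above). If the pod stays Pending after startup, a hedged way to inspect it manually, assuming the kubectl context carries the profile name (minikube's default) and that the addon's pods carry the usual k8s-app=metrics-server label:

    kubectl --context no-preload-963041 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context no-preload-963041 -n kube-system describe pod -l k8s-app=metrics-server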
	I0318 22:04:19.422826   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:19.423111   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
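Note: the two [kubelet-check] lines above are kubeadm's kubelet health probe failing for another profile in this run. The same probe can be reproduced by hand on the affected node with standard systemd and curl calls (a sketch; assumes SSH access to that VM):

    sudo systemctl status kubelet --no-pager        # is the unit running at all?
    curl -sSL http://localhost:10248/healthz        # the endpoint kubeadm polls
    sudo journalctl -u kubelet -n 50 --no-pager     # recent kubelet errors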
	I0318 22:04:29.143869   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.329052492s)
	I0318 22:04:29.143935   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:29.161708   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:04:29.173738   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:04:29.185221   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:04:29.185241   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 22:04:29.185273   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 22:04:29.196326   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:04:29.196382   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:04:29.207305   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 22:04:29.217759   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:04:29.217811   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:04:29.228350   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.239148   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:04:29.239191   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.251191   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 22:04:29.262291   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:04:29.262339   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
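Note: the grep/rm sequence above is minikube's stale-kubeconfig cleanup. Each file under /etc/kubernetes is kept only if it already points at the expected endpoint (https://control-plane.minikube.internal:8444); here all four files are missing, so they are removed and regenerated by the kubeadm init that follows. A rough shell equivalent of the behaviour shown (a sketch, not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or stale: let kubeadm init write a fresh one
      fi
    done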
	I0318 22:04:29.273343   65170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:04:29.332561   65170 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:04:29.333329   65170 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:04:29.496432   65170 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:04:29.496558   65170 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:04:29.496720   65170 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 22:04:29.728202   65170 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:04:29.730047   65170 out.go:204]   - Generating certificates and keys ...
	I0318 22:04:29.730126   65170 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:04:29.730202   65170 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:04:29.730297   65170 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:04:29.730669   65170 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:04:29.731209   65170 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:04:29.731887   65170 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:04:29.732569   65170 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:04:29.733362   65170 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:04:29.734045   65170 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:04:29.734477   65170 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:04:29.735264   65170 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:04:29.735340   65170 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:04:30.122363   65170 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:04:30.296021   65170 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:04:30.555774   65170 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:04:30.674403   65170 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:04:30.674943   65170 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:04:30.677509   65170 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:04:30.679219   65170 out.go:204]   - Booting up control plane ...
	I0318 22:04:30.679319   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:04:30.679402   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:04:30.681975   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:04:30.701015   65170 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:04:30.701902   65170 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:04:30.702104   65170 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:04:30.843019   65170 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:04:36.846312   65170 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002976 seconds
	I0318 22:04:36.846520   65170 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:04:36.870892   65170 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:04:37.410373   65170 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:04:37.410649   65170 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-660775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:04:37.935730   65170 kubeadm.go:309] [bootstrap-token] Using token: jwgiie.tp4r5ug6emevtbxj
	I0318 22:04:37.937024   65170 out.go:204]   - Configuring RBAC rules ...
	I0318 22:04:37.937156   65170 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:04:37.943204   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:04:37.951400   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:04:37.958005   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:04:37.962013   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:04:37.965783   65170 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:04:37.985150   65170 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:04:38.241561   65170 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:04:38.355495   65170 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:04:38.356452   65170 kubeadm.go:309] 
	I0318 22:04:38.356511   65170 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:04:38.356520   65170 kubeadm.go:309] 
	I0318 22:04:38.356598   65170 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:04:38.356609   65170 kubeadm.go:309] 
	I0318 22:04:38.356667   65170 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:04:38.356774   65170 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:04:38.356828   65170 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:04:38.356844   65170 kubeadm.go:309] 
	I0318 22:04:38.356898   65170 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:04:38.356916   65170 kubeadm.go:309] 
	I0318 22:04:38.356976   65170 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:04:38.356984   65170 kubeadm.go:309] 
	I0318 22:04:38.357030   65170 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:04:38.357093   65170 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:04:38.357161   65170 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:04:38.357168   65170 kubeadm.go:309] 
	I0318 22:04:38.357263   65170 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:04:38.357364   65170 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:04:38.357376   65170 kubeadm.go:309] 
	I0318 22:04:38.357495   65170 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.357657   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:04:38.357707   65170 kubeadm.go:309] 	--control-plane 
	I0318 22:04:38.357724   65170 kubeadm.go:309] 
	I0318 22:04:38.357861   65170 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:04:38.357873   65170 kubeadm.go:309] 
	I0318 22:04:38.357986   65170 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.358144   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:04:38.358726   65170 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
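Note: the only preflight warning kubeadm reports is the disabled kubelet unit; it can be cleared exactly as the message suggests (minikube starts the kubelet itself, so the warning is usually harmless in this context):

    sudo systemctl enable kubelet.service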
	I0318 22:04:38.358772   65170 cni.go:84] Creating CNI manager for ""
	I0318 22:04:38.358789   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:04:38.360246   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:04:38.361264   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:04:38.378420   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
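Note: the 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced just above. Its exact contents are not captured in this log; the excerpt below is only an illustration of a minimal bridge + portmap conflist of the kind minikube writes, and the subnet and plugin options are assumptions:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }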
	I0318 22:04:38.482111   65170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:04:38.482178   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:38.482194   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-660775 minikube.k8s.io/updated_at=2024_03_18T22_04_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=default-k8s-diff-port-660775 minikube.k8s.io/primary=true
	I0318 22:04:38.617420   65170 ops.go:34] apiserver oom_adj: -16
	I0318 22:04:38.828087   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.328292   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.828411   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.328829   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.828338   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.329118   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.828239   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.328296   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.828241   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.329151   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.829036   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.328224   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.828465   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.328632   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.828289   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.328321   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.828493   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.329008   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.828789   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.328727   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.829024   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.329010   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.828311   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.328474   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.445593   65170 kubeadm.go:1107] duration metric: took 11.963480655s to wait for elevateKubeSystemPrivileges
	W0318 22:04:50.445640   65170 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:04:50.445651   65170 kubeadm.go:393] duration metric: took 5m13.168616417s to StartCluster
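Note: the burst of identical 'kubectl get sa default' calls above is a poll: minikube retries roughly twice a second until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges wait measures (11.96s here). Functionally it amounts to a loop like this sketch, using the binary and kubeconfig paths from the log:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing of the retries above
    done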
	I0318 22:04:50.445672   65170 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.445754   65170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:04:50.447789   65170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.448086   65170 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:04:50.449989   65170 out.go:177] * Verifying Kubernetes components...
	I0318 22:04:50.448238   65170 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:04:50.450030   65170 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450044   65170 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450068   65170 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-660775"
	I0318 22:04:50.450070   65170 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.450078   65170 addons.go:243] addon storage-provisioner should already be in state true
	W0318 22:04:50.450082   65170 addons.go:243] addon metrics-server should already be in state true
	I0318 22:04:50.450105   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450116   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450033   65170 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450181   65170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-660775"
	I0318 22:04:50.450493   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450516   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450585   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450628   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.448310   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:04:50.452465   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:04:50.466764   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0318 22:04:50.468214   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0318 22:04:50.468460   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.468676   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469019   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469038   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469182   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469195   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469254   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0318 22:04:50.469549   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.469605   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469603   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470035   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.470053   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.470320   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470350   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470381   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470385   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470395   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470535   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.473854   65170 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.473879   65170 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:04:50.473907   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.474268   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.474301   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.485707   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39175
	I0318 22:04:50.486097   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0318 22:04:50.486278   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486675   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486809   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.486818   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487074   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.487086   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487345   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487513   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487561   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.487759   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.489284   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.491084   65170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:04:50.489730   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.492156   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0318 22:04:50.492539   65170 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.492549   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:04:50.492563   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.494057   65170 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:04:50.492998   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.495232   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:04:50.495253   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:04:50.495275   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.495863   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.495887   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.495952   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.496340   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496476   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.496620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.496757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.496861   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.497350   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.498004   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.498047   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.498450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.499027   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499235   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.499406   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.499565   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.499691   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.515126   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0318 22:04:50.515913   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.516473   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.516498   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.516800   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.517008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.518559   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.518811   65170 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.518825   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:04:50.518842   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.522625   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523156   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.523537   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523810   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.523984   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.524193   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.524430   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.682066   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:04:50.699269   65170 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709309   65170 node_ready.go:49] node "default-k8s-diff-port-660775" has status "Ready":"True"
	I0318 22:04:50.709330   65170 node_ready.go:38] duration metric: took 10.026001ms for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709342   65170 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.713958   65170 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720434   65170 pod_ready.go:92] pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.720459   65170 pod_ready.go:81] duration metric: took 6.477329ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720471   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725799   65170 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.725820   65170 pod_ready.go:81] duration metric: took 5.341405ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725829   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.730987   65170 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.731006   65170 pod_ready.go:81] duration metric: took 5.171376ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.731016   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737458   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.737481   65170 pod_ready.go:81] duration metric: took 6.458242ms for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737490   65170 pod_ready.go:38] duration metric: took 28.137606ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.737506   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:04:50.737560   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:04:50.757770   65170 api_server.go:72] duration metric: took 309.622189ms to wait for apiserver process to appear ...
	I0318 22:04:50.757795   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:04:50.757815   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 22:04:50.765732   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 22:04:50.769202   65170 api_server.go:141] control plane version: v1.28.4
	I0318 22:04:50.769228   65170 api_server.go:131] duration metric: took 11.424563ms to wait for apiserver health ...
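Note: the 200/ok healthz response above can be checked by hand from the node; the CA path below is the standard minikube certificate directory seen earlier in this log ("/var/lib/minikube/certs") and is an assumption for this VM:

    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.50.150:8444/healthz
    curl -k https://192.168.50.150:8444/healthz   # same probe, skipping TLS verification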
	I0318 22:04:50.769238   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:04:50.831223   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.859994   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:04:50.860014   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:04:50.864994   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.905212   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:04:50.905257   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:04:50.918389   65170 system_pods.go:59] 4 kube-system pods found
	I0318 22:04:50.918416   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:50.918422   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:50.918426   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:50.918429   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:50.918435   65170 system_pods.go:74] duration metric: took 149.190745ms to wait for pod list to return data ...
	I0318 22:04:50.918442   65170 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:50.993150   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:50.993174   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:04:51.056974   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:51.124585   65170 default_sa.go:45] found service account: "default"
	I0318 22:04:51.124612   65170 default_sa.go:55] duration metric: took 206.163161ms for default service account to be created ...
	I0318 22:04:51.124624   65170 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:51.347373   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.347408   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347419   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347426   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.347433   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.347440   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.347452   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.347458   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.347478   65170 retry.go:31] will retry after 201.830143ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.556559   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.556594   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556605   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556621   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.556630   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.556638   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.556648   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.556663   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.556681   65170 retry.go:31] will retry after 312.139871ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.878515   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.878546   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878554   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878562   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.878568   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.878573   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.878579   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.878582   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.878596   65170 retry.go:31] will retry after 379.864885ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.364944   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.364971   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364979   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364987   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.364995   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.365002   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.365011   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:52.365018   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.365039   65170 retry.go:31] will retry after 598.040475ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.752856   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921596456s)
	I0318 22:04:52.752915   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.752928   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753278   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753303   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.753314   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.753323   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753565   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753580   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.781081   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.781102   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.781396   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.781417   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.973228   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.973256   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Running
	I0318 22:04:52.973262   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Running
	I0318 22:04:52.973269   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.973275   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.973282   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.973289   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Running
	I0318 22:04:52.973295   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.973304   65170 system_pods.go:126] duration metric: took 1.848673952s to wait for k8s-apps to be running ...
	I0318 22:04:52.973310   65170 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:52.973361   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:53.343164   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.286142485s)
	I0318 22:04:53.343193   65170 system_svc.go:56] duration metric: took 369.874916ms WaitForService to wait for kubelet
	I0318 22:04:53.343215   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343229   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343216   65170 kubeadm.go:576] duration metric: took 2.89507195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:53.343238   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:53.343265   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.478242665s)
	I0318 22:04:53.343301   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343311   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343510   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.343555   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.343564   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.343572   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343580   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345065   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345078   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345082   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345065   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345094   65170 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-660775"
	I0318 22:04:53.345094   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345117   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345127   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.345136   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345401   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345400   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345419   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.347668   65170 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0318 22:04:53.348839   65170 addons.go:505] duration metric: took 2.900603006s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0318 22:04:53.363245   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:53.363274   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:53.363307   65170 node_conditions.go:105] duration metric: took 20.053581ms to run NodePressure ...
	I0318 22:04:53.363325   65170 start.go:240] waiting for startup goroutines ...
	I0318 22:04:53.363339   65170 start.go:245] waiting for cluster config update ...
	I0318 22:04:53.363353   65170 start.go:254] writing updated cluster config ...
	I0318 22:04:53.363674   65170 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:53.429018   65170 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:04:53.430584   65170 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-660775" cluster and "default" namespace by default
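	With the "default-k8s-diff-port-660775" context now active, a minimal sketch of a spot-check for the cluster this run just brought up; the context name comes from the line above, and the commands are standard kubectl usage rather than anything executed in this log:
	  # sketch, assuming the kubeconfig context written above
	  kubectl --context default-k8s-diff-port-660775 get nodes
	  kubectl --context default-k8s-diff-port-660775 -n kube-system get pods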
	I0318 22:04:59.424318   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:59.425052   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:59.425084   65622 kubeadm.go:309] 
	I0318 22:04:59.425146   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:04:59.425207   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:04:59.425223   65622 kubeadm.go:309] 
	I0318 22:04:59.425262   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:04:59.425298   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:04:59.425454   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:04:59.425481   65622 kubeadm.go:309] 
	I0318 22:04:59.425647   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:04:59.425704   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:04:59.425752   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:04:59.425762   65622 kubeadm.go:309] 
	I0318 22:04:59.425917   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:04:59.426033   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:04:59.426045   65622 kubeadm.go:309] 
	I0318 22:04:59.426212   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:04:59.426346   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:04:59.426454   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:04:59.426547   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:04:59.426558   65622 kubeadm.go:309] 
	I0318 22:04:59.427148   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:59.427271   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:04:59.427372   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 22:04:59.427528   65622 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
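	The node-side checks that the kubeadm output above recommends, gathered into one sketch; they assume a shell on the minikube VM for this profile (for example via 'minikube ssh -p old-k8s-version-648232') and add nothing beyond what the error text already suggests:
	  # sketch of the checks suggested by the kubeadm output above
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # once a failing container id is known from the listing above:
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID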
	
	I0318 22:04:59.427572   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:05:00.055064   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:05:00.070514   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:05:00.083916   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:05:00.083938   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:05:00.083984   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:05:00.095316   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:05:00.095362   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:05:00.106457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:05:00.117255   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:05:00.117309   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:05:00.128432   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.138314   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:05:00.138371   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.148443   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:05:00.158539   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:05:00.158585   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:05:00.169165   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:05:00.245400   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:05:00.245473   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:05:00.417644   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:05:00.417785   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:05:00.417883   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:05:00.634147   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:05:00.635738   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:05:00.635843   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:05:00.635930   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:05:00.636028   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:05:00.636089   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:05:00.636314   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:05:00.636537   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:05:00.636954   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:05:00.637502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:05:00.637924   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:05:00.638340   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:05:00.638425   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:05:00.638514   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:05:00.913839   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:05:00.990231   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:05:01.230957   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:05:01.548589   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:05:01.567890   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:05:01.569831   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:05:01.569913   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:05:01.734815   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:05:01.736685   65622 out.go:204]   - Booting up control plane ...
	I0318 22:05:01.736810   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:05:01.749926   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:05:01.751335   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:05:01.753793   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:05:01.754600   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:05:41.756944   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:05:41.757321   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:41.757565   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:46.758228   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:46.758483   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:56.759061   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:56.759280   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:16.760134   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:16.760369   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761317   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:56.761611   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761630   65622 kubeadm.go:309] 
	I0318 22:06:56.761682   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:06:56.761725   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:06:56.761732   65622 kubeadm.go:309] 
	I0318 22:06:56.761782   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:06:56.761829   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:06:56.761971   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:06:56.761988   65622 kubeadm.go:309] 
	I0318 22:06:56.762111   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:06:56.762159   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:06:56.762207   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:06:56.762221   65622 kubeadm.go:309] 
	I0318 22:06:56.762382   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:06:56.762502   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:06:56.762512   65622 kubeadm.go:309] 
	I0318 22:06:56.762630   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:06:56.762758   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:06:56.762856   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:06:56.762985   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:06:56.763011   65622 kubeadm.go:309] 
	I0318 22:06:56.763456   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:06:56.763590   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:06:56.763681   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 22:06:56.763764   65622 kubeadm.go:393] duration metric: took 7m58.719030677s to StartCluster
	I0318 22:06:56.763817   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:06:56.763885   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:06:56.813440   65622 cri.go:89] found id: ""
	I0318 22:06:56.813469   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.813480   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:06:56.813487   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:06:56.813553   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:06:56.852826   65622 cri.go:89] found id: ""
	I0318 22:06:56.852854   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.852865   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:06:56.852872   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:06:56.852949   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:06:56.894024   65622 cri.go:89] found id: ""
	I0318 22:06:56.894049   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.894057   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:06:56.894062   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:06:56.894123   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:06:56.932924   65622 cri.go:89] found id: ""
	I0318 22:06:56.932955   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.932967   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:06:56.932975   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:06:56.933033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:06:56.973307   65622 cri.go:89] found id: ""
	I0318 22:06:56.973336   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.973344   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:06:56.973350   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:06:56.973405   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:06:57.009107   65622 cri.go:89] found id: ""
	I0318 22:06:57.009134   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.009142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:06:57.009151   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:06:57.009213   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:06:57.046883   65622 cri.go:89] found id: ""
	I0318 22:06:57.046912   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.046922   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:06:57.046930   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:06:57.046991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:06:57.087670   65622 cri.go:89] found id: ""
	I0318 22:06:57.087698   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.087709   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:06:57.087722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:06:57.087736   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:06:57.143284   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:06:57.143320   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:06:57.159775   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:06:57.159803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:06:57.248520   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:06:57.248548   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:06:57.248563   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:06:57.368197   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:06:57.368230   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 22:06:57.413080   65622 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 22:06:57.413134   65622 out.go:239] * 
	W0318 22:06:57.413205   65622 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.413237   65622 out.go:239] * 
	W0318 22:06:57.414373   65622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 22:06:57.417746   65622 out.go:177] 
	W0318 22:06:57.418940   65622 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.419004   65622 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 22:06:57.419028   65622 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 22:06:57.420531   65622 out.go:177] 
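	A minimal sketch of applying the suggestion above on a retry; the profile name, Kubernetes version, container runtime, and driver are taken from this log and the report header, while the exact command line is otherwise an assumption:
	  # sketch: retry the failing profile with the cgroup-driver hint from the suggestion above
	  minikube start -p old-k8s-version-648232 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd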
	
	
	==> CRI-O <==
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.203675985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799619203649106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89f1406a-663a-449d-90ea-202d4b957cf2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.204300008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b53f8aa0-1d66-47d4-a4bd-ddd826630f70 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.204383777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b53f8aa0-1d66-47d4-a4bd-ddd826630f70 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.204418301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b53f8aa0-1d66-47d4-a4bd-ddd826630f70 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.246124211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99536d20-9a3b-44bb-ac0e-933aab01c604 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.246199016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99536d20-9a3b-44bb-ac0e-933aab01c604 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.247420469Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b37a1909-b09f-45ed-9c73-bf3d08a25294 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.247842185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799619247816368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b37a1909-b09f-45ed-9c73-bf3d08a25294 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.248764897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce5e1a80-cbd8-4336-ad05-90dd8fe812c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.248813059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce5e1a80-cbd8-4336-ad05-90dd8fe812c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.248844096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ce5e1a80-cbd8-4336-ad05-90dd8fe812c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.285310740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e49db0d-9ebe-48a7-ab01-c1b4cc58f87a name=/runtime.v1.RuntimeService/Version
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.285381006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e49db0d-9ebe-48a7-ab01-c1b4cc58f87a name=/runtime.v1.RuntimeService/Version
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.286799930Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11180984-9bb6-424a-93cf-0f6fbc6cc8e6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.287247624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799619287226606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11180984-9bb6-424a-93cf-0f6fbc6cc8e6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.287774266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfd064a0-f32e-42dd-ac1f-67d706839792 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.287826008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfd064a0-f32e-42dd-ac1f-67d706839792 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.287855373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cfd064a0-f32e-42dd-ac1f-67d706839792 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.322920088Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=329b1240-bd60-4de6-9b39-ecd8be7e1009 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.322987114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=329b1240-bd60-4de6-9b39-ecd8be7e1009 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.323936122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c811ceb9-8be3-43df-8547-ae4870c01406 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.324438989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799619324404455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c811ceb9-8be3-43df-8547-ae4870c01406 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.324959066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd144294-78b9-49af-8140-a348fde293cf name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.325003328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd144294-78b9-49af-8140-a348fde293cf name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:06:59 old-k8s-version-648232 crio[655]: time="2024-03-18 22:06:59.325039042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dd144294-78b9-49af-8140-a348fde293cf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 21:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055911] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044955] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.805580] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387906] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.740121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.008894] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062333] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062747] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.184160] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.169340] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.291707] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +7.214394] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.068357] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.838802] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Mar18 21:59] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 22:03] systemd-fstab-generator[4983]: Ignoring "noauto" option for root device
	[Mar18 22:05] systemd-fstab-generator[5260]: Ignoring "noauto" option for root device
	[  +0.070654] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:06:59 up 8 min,  0 users,  load average: 0.16, 0.13, 0.08
	Linux old-k8s-version-648232 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]: net.(*sysDialer).dialSerial(0xc000906000, 0x4f7fe40, 0xc000c5f260, 0xc000c4a870, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]:         /usr/local/go/src/net/dial.go:548 +0x152
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]: net.(*Dialer).DialContext(0xc000c19ce0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c435c0, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c35ba0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c435c0, 0x24, 0x60, 0x7f3a2d70ce68, 0x118, ...)
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]: net/http.(*Transport).dial(0xc00066e000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c435c0, 0x24, 0x0, 0x0, 0x4f0b860, ...)
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]: net/http.(*Transport).dialConn(0xc00066e000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000c7e540, 0x5, 0xc000c435c0, 0x24, 0x0, 0xc000c32120, ...)
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]: net/http.(*Transport).dialConnFor(0xc00066e000, 0xc0000d2370)
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]: created by net/http.(*Transport).queueForDial
	Mar 18 22:06:57 old-k8s-version-648232 kubelet[5444]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 18 22:06:57 old-k8s-version-648232 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 22:06:57 old-k8s-version-648232 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 18 22:06:58 old-k8s-version-648232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 18 22:06:58 old-k8s-version-648232 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 18 22:06:58 old-k8s-version-648232 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 18 22:06:58 old-k8s-version-648232 kubelet[5519]: I0318 22:06:58.526471    5519 server.go:416] Version: v1.20.0
	Mar 18 22:06:58 old-k8s-version-648232 kubelet[5519]: I0318 22:06:58.527813    5519 server.go:837] Client rotation is on, will bootstrap in background
	Mar 18 22:06:58 old-k8s-version-648232 kubelet[5519]: I0318 22:06:58.535258    5519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 18 22:06:58 old-k8s-version-648232 kubelet[5519]: I0318 22:06:58.537414    5519 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 18 22:06:58 old-k8s-version-648232 kubelet[5519]: W0318 22:06:58.537802    5519 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 2 (239.279739ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-648232" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (750.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-141758 -n embed-certs-141758
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 22:12:57.408590317 +0000 UTC m=+6234.310378768
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-141758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-141758 logs -n 25: (2.310451531s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-389288 sudo cat                              | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo find                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo crio                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-389288                                       | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-369155 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | disable-driver-mounts-369155                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:50 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-660775  | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC |                     |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-141758            | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:54:36.607114   65699 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:54:36.607254   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607266   65699 out.go:304] Setting ErrFile to fd 2...
	I0318 21:54:36.607272   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607706   65699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:54:36.608596   65699 out.go:298] Setting JSON to false
	I0318 21:54:36.609468   65699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5821,"bootTime":1710793056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:54:36.609529   65699 start.go:139] virtualization: kvm guest
	I0318 21:54:36.611401   65699 out.go:177] * [no-preload-963041] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:54:36.612703   65699 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:54:36.612704   65699 notify.go:220] Checking for updates...
	I0318 21:54:36.613976   65699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:54:36.615157   65699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:54:36.616283   65699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:54:36.617431   65699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:54:36.618615   65699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:54:36.620094   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:54:36.620490   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.620537   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.634914   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0318 21:54:36.635251   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.635706   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.635728   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.636019   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.636173   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.636411   65699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:54:36.636719   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.636756   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.650608   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0318 21:54:36.650946   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.651358   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.651383   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.651694   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.651832   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.682407   65699 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:54:36.683826   65699 start.go:297] selected driver: kvm2
	I0318 21:54:36.683837   65699 start.go:901] validating driver "kvm2" against &{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.683941   65699 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:54:36.684624   65699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.684696   65699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:54:36.699415   65699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:54:36.699766   65699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:54:36.699827   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:54:36.699840   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:54:36.699883   65699 start.go:340] cluster config:
	{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.699984   65699 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.701584   65699 out.go:177] * Starting "no-preload-963041" primary control-plane node in "no-preload-963041" cluster
	I0318 21:54:36.702792   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:54:36.702911   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:54:36.703027   65699 cache.go:107] acquiring lock: {Name:mk20bcc8d34b80cc44c1e33bc5e0ec5cd82ba46e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703044   65699 cache.go:107] acquiring lock: {Name:mk299438a86024ea6c96280d8bbe30c1283fa996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703087   65699 cache.go:107] acquiring lock: {Name:mkf5facbc69c16807f75e75a80a4afa3f97a0ecc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703124   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0318 21:54:36.703127   65699 start.go:360] acquireMachinesLock for no-preload-963041: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:54:36.703141   65699 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 102.209µs
	I0318 21:54:36.703156   65699 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0318 21:54:36.703104   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 21:54:36.703174   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 21:54:36.703172   65699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 156.262µs
	I0318 21:54:36.703190   65699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703043   65699 cache.go:107] acquiring lock: {Name:mk4c82b4e60b551671fa99921294b8e1f551d382 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703189   65699 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 104.037µs
	I0318 21:54:36.703209   65699 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 21:54:36.703137   65699 cache.go:107] acquiring lock: {Name:mk847ac7ddb8863389782289e61001579ff6ec5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703204   65699 cache.go:107] acquiring lock: {Name:mk1bf8cc3e30a7cf88f25697f1021501ea6ee4ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703243   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 21:54:36.703254   65699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 163.57µs
	I0318 21:54:36.703233   65699 cache.go:107] acquiring lock: {Name:mkf9c9b33c4d1ca54e3364ad39dcd3b10bc50534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703265   65699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703224   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 21:54:36.703282   65699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 247.672µs
	I0318 21:54:36.703293   65699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 21:54:36.703293   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 21:54:36.703293   65699 cache.go:107] acquiring lock: {Name:mkd0bd00e6f69df37097a8ce792bcc8844efbc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703315   65699 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 156.33µs
	I0318 21:54:36.703329   65699 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 21:54:36.703363   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 21:54:36.703385   65699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 207.404µs
	I0318 21:54:36.703400   65699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703411   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 21:54:36.703419   65699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 164.5µs
	I0318 21:54:36.703435   65699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703447   65699 cache.go:87] Successfully saved all images to host disk.
	I0318 21:54:40.421098   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:43.493261   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:49.573105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:52.645158   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:58.725124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:01.797077   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:07.877116   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:10.949096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:17.029117   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:20.101131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:26.181141   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:29.253113   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:35.333097   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:38.405132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:44.485208   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:47.557123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:53.637185   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:56.709102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:02.789134   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:05.861146   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:11.941102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:15.013092   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:21.093132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:24.165129   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:30.245127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:33.317151   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:39.397126   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:42.469163   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:48.549145   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:51.621085   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:57.701118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:00.773108   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:06.853105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:09.925096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:16.005131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:19.077111   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:25.157130   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:28.229107   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:34.309152   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:37.381127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:43.461123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:46.533127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:52.613124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:55.685135   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:01.765118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:04.837197   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:07.840986   65211 start.go:364] duration metric: took 4m36.169318619s to acquireMachinesLock for "embed-certs-141758"
	I0318 21:58:07.841046   65211 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:07.841054   65211 fix.go:54] fixHost starting: 
	I0318 21:58:07.841507   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:07.841544   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:07.856544   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0318 21:58:07.856976   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:07.857424   65211 main.go:141] libmachine: Using API Version  1
	I0318 21:58:07.857452   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:07.857783   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:07.857971   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:07.858126   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 21:58:07.859909   65211 fix.go:112] recreateIfNeeded on embed-certs-141758: state=Stopped err=<nil>
	I0318 21:58:07.859947   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	W0318 21:58:07.860120   65211 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:07.862134   65211 out.go:177] * Restarting existing kvm2 VM for "embed-certs-141758" ...
	I0318 21:58:07.838706   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:07.838746   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839036   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:58:07.839060   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839263   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:58:07.840867   65170 machine.go:97] duration metric: took 4m37.426711052s to provisionDockerMachine
	I0318 21:58:07.840915   65170 fix.go:56] duration metric: took 4m37.446713188s for fixHost
	I0318 21:58:07.840923   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 4m37.446748943s
	W0318 21:58:07.840945   65170 start.go:713] error starting host: provision: host is not running
	W0318 21:58:07.841017   65170 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 21:58:07.841026   65170 start.go:728] Will try again in 5 seconds ...
	I0318 21:58:07.863352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Start
	I0318 21:58:07.863483   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring networks are active...
	I0318 21:58:07.864202   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network default is active
	I0318 21:58:07.864652   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network mk-embed-certs-141758 is active
	I0318 21:58:07.865077   65211 main.go:141] libmachine: (embed-certs-141758) Getting domain xml...
	I0318 21:58:07.865858   65211 main.go:141] libmachine: (embed-certs-141758) Creating domain...
	I0318 21:58:09.026367   65211 main.go:141] libmachine: (embed-certs-141758) Waiting to get IP...
	I0318 21:58:09.027144   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.027524   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.027580   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.027503   66223 retry.go:31] will retry after 260.499882ms: waiting for machine to come up
	I0318 21:58:09.289935   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.290490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.290522   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.290450   66223 retry.go:31] will retry after 328.000758ms: waiting for machine to come up
	I0318 21:58:09.619947   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.620337   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.620384   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.620305   66223 retry.go:31] will retry after 419.640035ms: waiting for machine to come up
	I0318 21:58:10.041775   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.042186   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.042213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.042134   66223 retry.go:31] will retry after 482.732439ms: waiting for machine to come up
	I0318 21:58:10.526892   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.527282   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.527307   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.527253   66223 retry.go:31] will retry after 718.696645ms: waiting for machine to come up
	I0318 21:58:11.247165   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.247545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.247571   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.247501   66223 retry.go:31] will retry after 603.951593ms: waiting for machine to come up
	I0318 21:58:12.842928   65170 start.go:360] acquireMachinesLock for default-k8s-diff-port-660775: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:58:11.853119   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.853408   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.853438   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.853362   66223 retry.go:31] will retry after 1.191963995s: waiting for machine to come up
	I0318 21:58:13.046915   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:13.047289   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:13.047319   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:13.047237   66223 retry.go:31] will retry after 1.314666633s: waiting for machine to come up
	I0318 21:58:14.363693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:14.364109   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:14.364135   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:14.364064   66223 retry.go:31] will retry after 1.341191632s: waiting for machine to come up
	I0318 21:58:15.707425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:15.707921   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:15.707951   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:15.707862   66223 retry.go:31] will retry after 1.887572842s: waiting for machine to come up
	I0318 21:58:17.596545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:17.596970   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:17.597002   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:17.596899   66223 retry.go:31] will retry after 2.820006704s: waiting for machine to come up
	I0318 21:58:20.420327   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:20.420693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:20.420714   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:20.420659   66223 retry.go:31] will retry after 3.099836206s: waiting for machine to come up
	I0318 21:58:23.522155   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:23.522490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:23.522517   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:23.522450   66223 retry.go:31] will retry after 4.512794132s: waiting for machine to come up
	I0318 21:58:29.414007   65622 start.go:364] duration metric: took 3m59.339882587s to acquireMachinesLock for "old-k8s-version-648232"
	I0318 21:58:29.414072   65622 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:29.414080   65622 fix.go:54] fixHost starting: 
	I0318 21:58:29.414429   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:29.414462   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:29.431057   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0318 21:58:29.431482   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:29.432042   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:58:29.432067   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:29.432376   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:29.432568   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:29.432725   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetState
	I0318 21:58:29.433956   65622 fix.go:112] recreateIfNeeded on old-k8s-version-648232: state=Stopped err=<nil>
	I0318 21:58:29.433996   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	W0318 21:58:29.434155   65622 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:29.436328   65622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-648232" ...
	I0318 21:58:29.437884   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .Start
	I0318 21:58:29.438022   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring networks are active...
	I0318 21:58:29.438616   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network default is active
	I0318 21:58:29.438967   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network mk-old-k8s-version-648232 is active
	I0318 21:58:29.439362   65622 main.go:141] libmachine: (old-k8s-version-648232) Getting domain xml...
	I0318 21:58:29.440065   65622 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
	I0318 21:58:28.036425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.036898   65211 main.go:141] libmachine: (embed-certs-141758) Found IP for machine: 192.168.39.243
	I0318 21:58:28.036949   65211 main.go:141] libmachine: (embed-certs-141758) Reserving static IP address...
	I0318 21:58:28.036967   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has current primary IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.037428   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.037452   65211 main.go:141] libmachine: (embed-certs-141758) DBG | skip adding static IP to network mk-embed-certs-141758 - found existing host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"}
	I0318 21:58:28.037461   65211 main.go:141] libmachine: (embed-certs-141758) Reserved static IP address: 192.168.39.243
	I0318 21:58:28.037473   65211 main.go:141] libmachine: (embed-certs-141758) Waiting for SSH to be available...
	I0318 21:58:28.037485   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Getting to WaitForSSH function...
	I0318 21:58:28.039459   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039778   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.039810   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039928   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH client type: external
	I0318 21:58:28.039955   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa (-rw-------)
	I0318 21:58:28.039995   65211 main.go:141] libmachine: (embed-certs-141758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:28.040027   65211 main.go:141] libmachine: (embed-certs-141758) DBG | About to run SSH command:
	I0318 21:58:28.040044   65211 main.go:141] libmachine: (embed-certs-141758) DBG | exit 0
	I0318 21:58:28.169219   65211 main.go:141] libmachine: (embed-certs-141758) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:28.169554   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetConfigRaw
	I0318 21:58:28.170153   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.172372   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.172760   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.172787   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.173016   65211 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/config.json ...
	I0318 21:58:28.173186   65211 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:28.173203   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:28.173399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.175433   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175767   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.175802   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175920   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.176079   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176389   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.176553   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.176790   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.176805   65211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:28.285370   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:28.285407   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285629   65211 buildroot.go:166] provisioning hostname "embed-certs-141758"
	I0318 21:58:28.285651   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285856   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.288382   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288708   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.288739   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288863   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.289067   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289220   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289361   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.289515   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.289717   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.289735   65211 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-141758 && echo "embed-certs-141758" | sudo tee /etc/hostname
	I0318 21:58:28.420311   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-141758
	
	I0318 21:58:28.420351   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.422864   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.423245   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423431   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.423608   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423759   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423891   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.424044   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.424234   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.424256   65211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-141758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-141758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-141758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:28.549277   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:28.549307   65211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:28.549325   65211 buildroot.go:174] setting up certificates
	I0318 21:58:28.549334   65211 provision.go:84] configureAuth start
	I0318 21:58:28.549343   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.549572   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.551881   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552183   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.552205   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.554341   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554629   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.554656   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554752   65211 provision.go:143] copyHostCerts
	I0318 21:58:28.554812   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:28.554825   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:28.554912   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:28.555020   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:28.555032   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:28.555062   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:28.555145   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:28.555155   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:28.555192   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:28.555259   65211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.embed-certs-141758 san=[127.0.0.1 192.168.39.243 embed-certs-141758 localhost minikube]
	I0318 21:58:28.706111   65211 provision.go:177] copyRemoteCerts
	I0318 21:58:28.706158   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:28.706185   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.708537   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708795   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.708822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.709164   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.709335   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.709446   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:28.796199   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:28.827207   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:58:28.854273   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:28.880505   65211 provision.go:87] duration metric: took 331.161751ms to configureAuth
	I0318 21:58:28.880524   65211 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:28.880716   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:58:28.880801   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.883232   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.883583   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883753   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.883926   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884087   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884186   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.884339   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.884481   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.884496   65211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:29.164330   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:29.164357   65211 machine.go:97] duration metric: took 991.159236ms to provisionDockerMachine
	I0318 21:58:29.164370   65211 start.go:293] postStartSetup for "embed-certs-141758" (driver="kvm2")
	I0318 21:58:29.164381   65211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:29.164434   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.164734   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:29.164758   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.167400   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167696   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.167719   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167867   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.168065   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.168235   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.168352   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.256141   65211 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:29.261086   65211 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:29.261104   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:29.261157   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:29.261229   65211 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:29.261309   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:29.271174   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:29.297161   65211 start.go:296] duration metric: took 132.781067ms for postStartSetup
	I0318 21:58:29.297192   65211 fix.go:56] duration metric: took 21.456139061s for fixHost
	I0318 21:58:29.297208   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.299741   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300102   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.300127   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300289   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.300480   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300750   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.300864   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:29.301028   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:29.301039   65211 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:29.413842   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799109.363417589
	
	I0318 21:58:29.413869   65211 fix.go:216] guest clock: 1710799109.363417589
	I0318 21:58:29.413876   65211 fix.go:229] Guest: 2024-03-18 21:58:29.363417589 +0000 UTC Remote: 2024-03-18 21:58:29.297195181 +0000 UTC m=+297.765354372 (delta=66.222408ms)
	I0318 21:58:29.413892   65211 fix.go:200] guest clock delta is within tolerance: 66.222408ms
	I0318 21:58:29.413899   65211 start.go:83] releasing machines lock for "embed-certs-141758", held for 21.572869797s
	I0318 21:58:29.413932   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.414191   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:29.416929   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417293   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.417318   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417500   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418019   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418159   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418230   65211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:29.418275   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.418330   65211 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:29.418344   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.420728   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421022   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421053   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421076   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421228   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421413   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421464   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421493   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421593   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.421673   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421749   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.421828   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421960   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.422081   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.502548   65211 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:29.531994   65211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:29.681482   65211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:29.689671   65211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:29.689735   65211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:29.711660   65211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:29.711682   65211 start.go:494] detecting cgroup driver to use...
	I0318 21:58:29.711750   65211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:29.728159   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:29.742409   65211 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:29.742450   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:29.757587   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:29.772218   65211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:29.883164   65211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:30.046773   65211 docker.go:233] disabling docker service ...
	I0318 21:58:30.046845   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:30.065878   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:30.081551   65211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:30.223188   65211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:30.353535   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:30.370291   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:30.391728   65211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:58:30.391789   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.409204   65211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:30.409281   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.426464   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.439964   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.452097   65211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:30.464410   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.475990   65211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.495092   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.506831   65211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:30.517410   65211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:30.517463   65211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:30.532465   65211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:58:30.543958   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:30.679788   65211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:30.839388   65211 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:30.839466   65211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:30.844666   65211 start.go:562] Will wait 60s for crictl version
	I0318 21:58:30.844720   65211 ssh_runner.go:195] Run: which crictl
	I0318 21:58:30.848886   65211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:30.888598   65211 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:30.888686   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.921097   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.954037   65211 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:58:30.955378   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:30.958352   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.958792   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:30.958822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.959064   65211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:30.963556   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:30.977788   65211 kubeadm.go:877] updating cluster {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:30.977899   65211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:58:30.977949   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:31.018843   65211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:58:31.018926   65211 ssh_runner.go:195] Run: which lz4
	I0318 21:58:31.023589   65211 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:58:31.028416   65211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:31.028445   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:58:30.668558   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting to get IP...
	I0318 21:58:30.669483   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.669936   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.670023   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.669931   66350 retry.go:31] will retry after 222.544346ms: waiting for machine to come up
	I0318 21:58:30.894570   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.895113   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.895140   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.895068   66350 retry.go:31] will retry after 355.752794ms: waiting for machine to come up
	I0318 21:58:31.252797   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.253265   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.253293   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.253217   66350 retry.go:31] will retry after 473.104426ms: waiting for machine to come up
	I0318 21:58:31.727579   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.728129   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.728157   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.728079   66350 retry.go:31] will retry after 566.412205ms: waiting for machine to come up
	I0318 21:58:32.295552   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.296044   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.296072   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.296004   66350 retry.go:31] will retry after 573.484484ms: waiting for machine to come up
	I0318 21:58:32.870871   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.871287   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.871346   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.871277   66350 retry.go:31] will retry after 932.863596ms: waiting for machine to come up
	I0318 21:58:33.805377   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:33.805847   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:33.805895   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:33.805795   66350 retry.go:31] will retry after 1.069321569s: waiting for machine to come up
	I0318 21:58:34.877311   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:34.877827   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:34.877860   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:34.877773   66350 retry.go:31] will retry after 1.27837332s: waiting for machine to come up
	I0318 21:58:32.944637   65211 crio.go:462] duration metric: took 1.921083293s to copy over tarball
	I0318 21:58:32.944709   65211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:35.696230   65211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751490576s)
	I0318 21:58:35.696261   65211 crio.go:469] duration metric: took 2.751600779s to extract the tarball
	I0318 21:58:35.696271   65211 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:35.739467   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:35.794398   65211 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:58:35.794427   65211 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:58:35.794436   65211 kubeadm.go:928] updating node { 192.168.39.243 8443 v1.28.4 crio true true} ...
	I0318 21:58:35.794559   65211 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-141758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:35.794625   65211 ssh_runner.go:195] Run: crio config
	I0318 21:58:35.844849   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:35.844877   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:35.844888   65211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:35.844923   65211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-141758 NodeName:embed-certs-141758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:58:35.845069   65211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-141758"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
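	(Editor's note: the four YAML documents above — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — are what the log later copies to /var/tmp/minikube/kubeadm.yaml.new. If you want to sanity-check a multi-document config like this outside the cluster, a minimal Go sketch is shown below; it assumes gopkg.in/yaml.v3 and a local copy of the config saved as kubeadm.yaml, both of which are assumptions for this example, and it only verifies that each document parses and reports its kind.)

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// hypothetical local copy of the kubeadm config dumped in the log above
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		err := dec.Decode(&doc)
    		if errors.Is(err, io.EOF) {
    			break // no more YAML documents in the stream
    		}
    		if err != nil {
    			log.Fatalf("invalid YAML document: %v", err)
    		}
    		fmt.Printf("parsed %s/%s\n", doc.APIVersion, doc.Kind)
    	}
    }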
	
	I0318 21:58:35.845124   65211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:58:35.856885   65211 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:35.856950   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:35.867990   65211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 21:58:35.887057   65211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:35.909244   65211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 21:58:35.931267   65211 ssh_runner.go:195] Run: grep 192.168.39.243	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:35.935793   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:35.950323   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:36.093377   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:36.112548   65211 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758 for IP: 192.168.39.243
	I0318 21:58:36.112575   65211 certs.go:194] generating shared ca certs ...
	I0318 21:58:36.112596   65211 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:36.112766   65211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:36.112813   65211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:36.112822   65211 certs.go:256] generating profile certs ...
	I0318 21:58:36.112943   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/client.key
	I0318 21:58:36.113043   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key.d575a4ae
	I0318 21:58:36.113097   65211 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key
	I0318 21:58:36.113263   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:36.113307   65211 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:36.113322   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:36.113359   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:36.113396   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:36.113429   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:36.113536   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:36.114412   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:36.147930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:36.177554   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:36.208374   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:36.243425   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 21:58:36.276720   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:58:36.317930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:36.345717   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:58:36.371655   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:36.396998   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:36.422750   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:36.448117   65211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:36.466558   65211 ssh_runner.go:195] Run: openssl version
	I0318 21:58:36.472888   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:36.484389   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489534   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489585   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.496045   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:36.507723   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:36.519030   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524214   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524267   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.531109   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:36.543912   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:36.556130   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561330   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561369   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.567883   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:36.158196   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:36.158633   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:36.158667   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:36.158581   66350 retry.go:31] will retry after 1.348066025s: waiting for machine to come up
	I0318 21:58:37.509248   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:37.509617   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:37.509637   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:37.509581   66350 retry.go:31] will retry after 2.080074922s: waiting for machine to come up
	I0318 21:58:39.591514   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:39.591973   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:39.592001   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:39.591934   66350 retry.go:31] will retry after 2.302421788s: waiting for machine to come up
	I0318 21:58:36.579819   65211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:36.824046   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:36.831273   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:36.838571   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:36.845621   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:36.852423   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:36.859433   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:58:36.866091   65211 kubeadm.go:391] StartCluster: {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:36.866212   65211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:36.866263   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:36.912390   65211 cri.go:89] found id: ""
	I0318 21:58:36.912460   65211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:36.929896   65211 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:36.929923   65211 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:36.929931   65211 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:36.929985   65211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:36.947191   65211 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:36.948613   65211 kubeconfig.go:125] found "embed-certs-141758" server: "https://192.168.39.243:8443"
	I0318 21:58:36.951641   65211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:36.966095   65211 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.243
	I0318 21:58:36.966135   65211 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:36.966150   65211 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:36.966216   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:37.022620   65211 cri.go:89] found id: ""
	I0318 21:58:37.022680   65211 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:37.042338   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:37.054534   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:37.054552   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:37.054588   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:37.066099   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:37.066166   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:37.077340   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:37.088158   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:37.088214   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:37.099190   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.110081   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:37.110118   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.121852   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:37.133161   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:37.133215   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:37.144199   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:37.155593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.271593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.921199   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.175721   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.264478   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.377591   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:38.377683   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:38.878031   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.377859   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.417546   65211 api_server.go:72] duration metric: took 1.039957218s to wait for apiserver process to appear ...
	I0318 21:58:39.417576   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:58:39.417599   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:39.418125   65211 api_server.go:269] stopped: https://192.168.39.243:8443/healthz: Get "https://192.168.39.243:8443/healthz": dial tcp 192.168.39.243:8443: connect: connection refused
	I0318 21:58:39.917663   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.450620   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.450656   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.450668   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.489722   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.489755   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.918487   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.924551   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:42.924584   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.418077   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.424938   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:43.424969   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.918053   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.922905   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 21:58:43.931126   65211 api_server.go:141] control plane version: v1.28.4
	I0318 21:58:43.931151   65211 api_server.go:131] duration metric: took 4.513568499s to wait for apiserver health ...
	I0318 21:58:43.931159   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:43.931173   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:43.932876   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:58:41.897573   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:41.898012   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:41.898035   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:41.897964   66350 retry.go:31] will retry after 2.645096928s: waiting for machine to come up
	I0318 21:58:44.544646   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:44.545116   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:44.545153   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:44.545053   66350 retry.go:31] will retry after 3.010240256s: waiting for machine to come up
	I0318 21:58:43.934155   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:58:43.948750   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:58:43.978849   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:58:43.991046   65211 system_pods.go:59] 8 kube-system pods found
	I0318 21:58:43.991082   65211 system_pods.go:61] "coredns-5dd5756b68-r9pft" [add358cf-d544-4107-a05f-5e60542ea456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:58:43.991089   65211 system_pods.go:61] "etcd-embed-certs-141758" [31274121-ec65-46b5-bcda-65698c28bd1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:58:43.991095   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [61e4c0db-7a20-4c93-83b3-de4738e82614] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:58:43.991100   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [c2ffe900-4e3a-4c21-ae8f-cd42475207c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:58:43.991105   65211 system_pods.go:61] "kube-proxy-klmnb" [45b0c762-4eaf-4e8a-b321-0d474f61086e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:58:43.991109   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [5aeed9aa-9d98-49c0-bf8a-3998738f6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:58:43.991114   65211 system_pods.go:61] "metrics-server-57f55c9bc5-vt7hj" [949e4c0f-6a76-4141-b30c-f27291873f14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:58:43.991123   65211 system_pods.go:61] "storage-provisioner" [0aca1af6-3221-4698-915b-cabb9da662bf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:58:43.991128   65211 system_pods.go:74] duration metric: took 12.25858ms to wait for pod list to return data ...
	I0318 21:58:43.991136   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:58:43.996109   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:58:43.996135   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 21:58:43.996146   65211 node_conditions.go:105] duration metric: took 5.004614ms to run NodePressure ...
	I0318 21:58:43.996163   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:44.227606   65211 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234823   65211 kubeadm.go:733] kubelet initialised
	I0318 21:58:44.234846   65211 kubeadm.go:734] duration metric: took 7.215375ms waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234854   65211 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:58:44.241197   65211 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.248990   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249008   65211 pod_ready.go:81] duration metric: took 7.784519ms for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.249016   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249022   65211 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.254792   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254820   65211 pod_ready.go:81] duration metric: took 5.788084ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.254833   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254846   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.261248   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261272   65211 pod_ready.go:81] duration metric: took 6.415486ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.261282   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261291   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.383016   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383056   65211 pod_ready.go:81] duration metric: took 121.750871ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.383069   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383078   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784241   65211 pod_ready.go:92] pod "kube-proxy-klmnb" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:44.784264   65211 pod_ready.go:81] duration metric: took 401.177044ms for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784272   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:48.950018   65699 start.go:364] duration metric: took 4m12.246849763s to acquireMachinesLock for "no-preload-963041"
	I0318 21:58:48.950078   65699 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:48.950087   65699 fix.go:54] fixHost starting: 
	I0318 21:58:48.950522   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:48.950556   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:48.966094   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0318 21:58:48.966492   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:48.966970   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:58:48.966994   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:48.967295   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:48.967443   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:58:48.967548   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:58:48.968800   65699 fix.go:112] recreateIfNeeded on no-preload-963041: state=Stopped err=<nil>
	I0318 21:58:48.968835   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	W0318 21:58:48.969105   65699 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:48.970900   65699 out.go:177] * Restarting existing kvm2 VM for "no-preload-963041" ...
	I0318 21:58:47.559274   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559793   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has current primary IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559814   65622 main.go:141] libmachine: (old-k8s-version-648232) Found IP for machine: 192.168.61.111
	I0318 21:58:47.559828   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserving static IP address...
	I0318 21:58:47.560325   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.560359   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserved static IP address: 192.168.61.111
	I0318 21:58:47.560385   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | skip adding static IP to network mk-old-k8s-version-648232 - found existing host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"}
	I0318 21:58:47.560401   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Getting to WaitForSSH function...
	I0318 21:58:47.560417   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting for SSH to be available...
	I0318 21:58:47.562852   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563285   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.563314   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563494   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH client type: external
	I0318 21:58:47.563522   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa (-rw-------)
	I0318 21:58:47.563561   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:47.563576   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | About to run SSH command:
	I0318 21:58:47.563622   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | exit 0
	I0318 21:58:47.692948   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:47.693373   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:58:47.694034   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:47.696795   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697184   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.697213   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697437   65622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:58:47.697637   65622 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:47.697658   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:47.697846   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.700225   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700525   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.700549   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700649   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.700816   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.700993   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.701112   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.701276   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.701440   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.701450   65622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:47.809658   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:47.809690   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.809920   65622 buildroot.go:166] provisioning hostname "old-k8s-version-648232"
	I0318 21:58:47.809945   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.810132   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.812510   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.812869   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.812896   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.813079   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.813266   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813414   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813559   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.813726   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.813935   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.813954   65622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-648232 && echo "old-k8s-version-648232" | sudo tee /etc/hostname
	I0318 21:58:47.949030   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-648232
	
	I0318 21:58:47.949063   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.952028   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952387   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.952424   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952586   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.952768   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.952972   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.953109   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.953280   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.953488   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.953514   65622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-648232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-648232/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-648232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:48.072416   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:48.072457   65622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:48.072484   65622 buildroot.go:174] setting up certificates
	I0318 21:58:48.072494   65622 provision.go:84] configureAuth start
	I0318 21:58:48.072506   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:48.072802   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.075880   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076202   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.076235   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076407   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.078791   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079125   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.079155   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079292   65622 provision.go:143] copyHostCerts
	I0318 21:58:48.079370   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:48.079385   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:48.079441   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:48.079552   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:48.079565   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:48.079595   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:48.079675   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:48.079686   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:48.079719   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:48.079797   65622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-648232 san=[127.0.0.1 192.168.61.111 localhost minikube old-k8s-version-648232]
	I0318 21:58:48.236852   65622 provision.go:177] copyRemoteCerts
	I0318 21:58:48.236923   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:48.236952   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.239485   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.239807   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.239839   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.240022   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.240187   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.240338   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.240470   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.338739   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:48.367538   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:58:48.397586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:48.425384   65622 provision.go:87] duration metric: took 352.877274ms to configureAuth
	I0318 21:58:48.425415   65622 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:48.425624   65622 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:58:48.425693   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.427989   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428345   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.428365   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428593   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.428793   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.428968   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.429114   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.429269   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.429434   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.429455   65622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:48.706098   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:48.706131   65622 machine.go:97] duration metric: took 1.008474629s to provisionDockerMachine
	I0318 21:58:48.706148   65622 start.go:293] postStartSetup for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:58:48.706165   65622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:48.706193   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.706546   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:48.706580   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.709104   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709434   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.709464   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709589   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.709787   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.709969   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.710109   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.792915   65622 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:48.797845   65622 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:48.797864   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:48.797932   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:48.798038   65622 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:48.798150   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:48.808487   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:48.838863   65622 start.go:296] duration metric: took 132.703395ms for postStartSetup
	I0318 21:58:48.838896   65622 fix.go:56] duration metric: took 19.424816589s for fixHost
	I0318 21:58:48.838927   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.841223   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841572   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.841603   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841683   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.841876   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842015   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842138   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.842295   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.842469   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.842483   65622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:48.949868   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799128.925696756
	
	I0318 21:58:48.949893   65622 fix.go:216] guest clock: 1710799128.925696756
	I0318 21:58:48.949901   65622 fix.go:229] Guest: 2024-03-18 21:58:48.925696756 +0000 UTC Remote: 2024-03-18 21:58:48.838901995 +0000 UTC m=+258.909510680 (delta=86.794761ms)
	I0318 21:58:48.949925   65622 fix.go:200] guest clock delta is within tolerance: 86.794761ms
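The remote probe was presumably date +%s.%N (the %!s(MISSING).%!N(MISSING) above again being logger placeholders), and the reported delta is simply guest time minus local time:

    1710799128.925696756 - 1710799128.838901995 = 0.086794761 s  ≈ 86.794761ms (within tolerance)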
	I0318 21:58:48.949932   65622 start.go:83] releasing machines lock for "old-k8s-version-648232", held for 19.535879787s
	I0318 21:58:48.949963   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.950245   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.952656   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953000   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.953030   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953184   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953664   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953845   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953931   65622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:48.953973   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.954053   65622 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:48.954070   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.956479   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956764   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956801   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.956828   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956944   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957100   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957250   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957281   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.957302   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.957432   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.957451   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957582   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957721   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957858   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:49.066050   65622 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:49.072126   65622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:49.220860   65622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:49.227821   65622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:49.227882   65622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:49.245262   65622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:49.245285   65622 start.go:494] detecting cgroup driver to use...
	I0318 21:58:49.245359   65622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:49.261736   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:49.278239   65622 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:49.278289   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:49.297240   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:49.312813   65622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:49.435983   65622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:49.584356   65622 docker.go:233] disabling docker service ...
	I0318 21:58:49.584432   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:49.603469   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:49.619602   65622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:49.775541   65622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:49.919861   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:49.940785   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:49.964296   65622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:58:49.964356   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.976612   65622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:49.977221   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.988978   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.000697   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
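Taken together, the three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch, not captured from the run):

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"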
	I0318 21:58:50.012348   65622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:50.023873   65622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:50.033574   65622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:50.033611   65622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:50.047262   65622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
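The sysctl probe fails because the br_netfilter module is not yet loaded (the /proc/sys/net/bridge/ keys only exist once it is), which is why the modprobe and the ip_forward write follow. A hypothetical re-check after the modprobe would look like:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # key should now exist; typically expected to be 1 for bridged pod traffic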
	I0318 21:58:50.058328   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:50.205960   65622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:50.356293   65622 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:50.356376   65622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:50.361732   65622 start.go:562] Will wait 60s for crictl version
	I0318 21:58:50.361796   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:50.366347   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:50.406298   65622 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:50.406398   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.440705   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.473017   65622 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:58:46.795337   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:49.295100   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.299437   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:48.972407   65699 main.go:141] libmachine: (no-preload-963041) Calling .Start
	I0318 21:58:48.972572   65699 main.go:141] libmachine: (no-preload-963041) Ensuring networks are active...
	I0318 21:58:48.973251   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network default is active
	I0318 21:58:48.973606   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network mk-no-preload-963041 is active
	I0318 21:58:48.973992   65699 main.go:141] libmachine: (no-preload-963041) Getting domain xml...
	I0318 21:58:48.974629   65699 main.go:141] libmachine: (no-preload-963041) Creating domain...
	I0318 21:58:50.190010   65699 main.go:141] libmachine: (no-preload-963041) Waiting to get IP...
	I0318 21:58:50.190750   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.191241   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.191320   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.191220   66466 retry.go:31] will retry after 238.162453ms: waiting for machine to come up
	I0318 21:58:50.430778   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.431262   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.431292   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.431191   66466 retry.go:31] will retry after 318.744541ms: waiting for machine to come up
	I0318 21:58:50.751612   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.752051   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.752086   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.752007   66466 retry.go:31] will retry after 464.29047ms: waiting for machine to come up
	I0318 21:58:51.218462   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.219034   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.219062   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.218983   66466 retry.go:31] will retry after 476.466311ms: waiting for machine to come up
	I0318 21:58:50.474496   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:50.477908   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478353   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:50.478389   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478618   65622 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:50.483617   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:50.499147   65622 kubeadm.go:877] updating cluster {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:50.499269   65622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:58:50.499333   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:50.551649   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:50.551716   65622 ssh_runner.go:195] Run: which lz4
	I0318 21:58:50.556525   65622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:58:50.561566   65622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:50.561594   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:58:52.646283   65622 crio.go:462] duration metric: took 2.089798336s to copy over tarball
	I0318 21:58:52.646359   65622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:53.792483   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.696634   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.697179   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.697208   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.697099   66466 retry.go:31] will retry after 520.896381ms: waiting for machine to come up
	I0318 21:58:52.219861   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:52.220480   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:52.220506   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:52.220414   66466 retry.go:31] will retry after 872.240898ms: waiting for machine to come up
	I0318 21:58:53.094123   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.094547   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.094580   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.094499   66466 retry.go:31] will retry after 757.325359ms: waiting for machine to come up
	I0318 21:58:53.852954   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.853422   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.853453   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.853358   66466 retry.go:31] will retry after 1.459327383s: waiting for machine to come up
	I0318 21:58:55.313969   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:55.314382   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:55.314413   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:55.314328   66466 retry.go:31] will retry after 1.373606235s: waiting for machine to come up
	I0318 21:58:55.995228   65622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.348837805s)
	I0318 21:58:55.995262   65622 crio.go:469] duration metric: took 3.348951107s to extract the tarball
	I0318 21:58:55.995271   65622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:56.043148   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:56.091295   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:56.091320   65622 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:58:56.091409   65622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.091418   65622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.091431   65622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.091421   65622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.091448   65622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.091471   65622 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.091506   65622 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.091512   65622 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:58:56.092923   65622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.093028   65622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.093048   65622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.093052   65622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.092924   65622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.093136   65622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.093143   65622 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:58:56.093250   65622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.239200   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.242232   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.244160   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.248823   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.255548   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.264753   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.306940   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:58:56.359783   65622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:58:56.359825   65622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.359874   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413012   65622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:58:56.413051   65622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.413101   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413420   65622 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:58:56.413455   65622 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.413490   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.442743   65622 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:58:56.442787   65622 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.442832   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.450680   65622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:58:56.450733   65622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.450798   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.462926   65622 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:58:56.462963   65622 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:58:56.462989   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.462992   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.463034   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.463090   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.463138   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.463145   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.463159   65622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:58:56.463183   65622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.463221   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.592127   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.592159   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:58:56.593931   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:58:56.593968   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:58:56.593973   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:58:56.594059   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:58:56.594143   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:58:56.660138   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:58:56.660360   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:58:56.983635   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:57.142451   65622 cache_images.go:92] duration metric: took 1.051113719s to LoadCachedImages
	W0318 21:58:57.142554   65622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
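The warning means the image cache on the Jenkins host has no entry for kube-proxy_v1.20.0, so the cached-image load is skipped. A quick host-side check (not part of the run) would be something like:

    ls -l /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0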
	I0318 21:58:57.142575   65622 kubeadm.go:928] updating node { 192.168.61.111 8443 v1.20.0 crio true true} ...
	I0318 21:58:57.142723   65622 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-648232 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:57.142797   65622 ssh_runner.go:195] Run: crio config
	I0318 21:58:57.195416   65622 cni.go:84] Creating CNI manager for ""
	I0318 21:58:57.195439   65622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:57.195451   65622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:57.195468   65622 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-648232 NodeName:old-k8s-version-648232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:58:57.195585   65622 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-648232"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
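In the kubelet section of the generated config above, the "0%!"(MISSING) values are again logger placeholders; the file written to the guest presumably reads:

    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"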
	
	I0318 21:58:57.195650   65622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:58:57.208700   65622 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:57.208757   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:57.220276   65622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 21:58:57.239513   65622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:57.258540   65622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 21:58:57.277932   65622 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:57.282433   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:57.298049   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:57.427745   65622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:57.459845   65622 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232 for IP: 192.168.61.111
	I0318 21:58:57.459867   65622 certs.go:194] generating shared ca certs ...
	I0318 21:58:57.459904   65622 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:57.460072   65622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:57.460123   65622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:57.460138   65622 certs.go:256] generating profile certs ...
	I0318 21:58:57.460254   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key
	I0318 21:58:57.460328   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4
	I0318 21:58:57.460376   65622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key
	I0318 21:58:57.460521   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:57.460560   65622 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:57.460573   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:57.460602   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:57.460637   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:57.460668   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:57.460733   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:57.461586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:57.515591   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:57.541750   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:57.575282   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:57.617495   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:58:57.657111   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:58:57.705104   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:57.737956   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:58:57.766218   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:57.793952   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:57.824458   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:57.852188   65622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:57.872773   65622 ssh_runner.go:195] Run: openssl version
	I0318 21:58:57.880817   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:57.896644   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902576   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902636   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.908893   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
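The symlink name in the second command (3ec20f2e.0) is the certificate's subject hash produced by the first; the pattern being exercised is essentially (sketch of the two commands shown in the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem)   # prints e.g. 3ec20f2e
    sudo ln -fs /etc/ssl/certs/125682.pem "/etc/ssl/certs/${h}.0"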
	I0318 21:58:57.922730   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:57.936508   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941802   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941839   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.948093   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:57.961852   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:57.974049   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978886   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978929   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.984848   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:57.997033   65622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:58.002171   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:58.008665   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:58.014908   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:58.021663   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:58.029605   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:58.038208   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
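Each -checkend 86400 probe above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it expires sooner. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid in 24h" || echo "expires within 24h"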
	I0318 21:58:58.044738   65622 kubeadm.go:391] StartCluster: {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:58.044828   65622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:58.044881   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.095866   65622 cri.go:89] found id: ""
	I0318 21:58:58.096010   65622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:58.108723   65622 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:58.108745   65622 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:58.108751   65622 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:58.108797   65622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:58.120754   65622 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:58.121803   65622 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:58:58.122532   65622 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-648232" cluster setting kubeconfig missing "old-k8s-version-648232" context setting]
	I0318 21:58:58.123561   65622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:58.125229   65622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:58.136331   65622 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.111
	I0318 21:58:58.136360   65622 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:58.136372   65622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:58.136416   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.179370   65622 cri.go:89] found id: ""
	I0318 21:58:58.179465   65622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:58.197860   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:58.208772   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:58.208796   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:58.208837   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:58.219033   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:58.219090   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:58.230223   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:58.240823   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:58.240886   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:58.251629   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.262525   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:58.262573   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.274831   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:58.286644   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:58.286690   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:58.298127   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:58.309664   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:58.456818   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.106974   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.334718   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.434113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.534368   65622 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:59.534461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:57.057776   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:57.791727   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:57.791754   65211 pod_ready.go:81] duration metric: took 13.007474768s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:57.791769   65211 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:59.800074   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:56.689643   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:56.690039   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:56.690064   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:56.690020   66466 retry.go:31] will retry after 1.905319343s: waiting for machine to come up
	I0318 21:58:58.597961   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:58.598470   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:58.598501   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:58.598420   66466 retry.go:31] will retry after 2.720364267s: waiting for machine to come up
	I0318 21:59:01.321901   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:01.322290   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:01.322312   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:01.322254   66466 retry.go:31] will retry after 2.73029124s: waiting for machine to come up
	I0318 21:59:00.035251   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:00.534822   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.034721   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.535447   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.034809   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.535193   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.034597   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.534670   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.035493   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.535148   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.299143   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.800475   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.054294   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:04.054715   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:04.054752   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:04.054671   66466 retry.go:31] will retry after 3.148777081s: waiting for machine to come up
	I0318 21:59:08.706453   65170 start.go:364] duration metric: took 55.86344587s to acquireMachinesLock for "default-k8s-diff-port-660775"
	I0318 21:59:08.706504   65170 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:59:08.706515   65170 fix.go:54] fixHost starting: 
	I0318 21:59:08.706934   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:08.706970   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:08.723564   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0318 21:59:08.723935   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:08.724359   65170 main.go:141] libmachine: Using API Version  1
	I0318 21:59:08.724381   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:08.724671   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:08.724874   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:08.725045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 21:59:08.726635   65170 fix.go:112] recreateIfNeeded on default-k8s-diff-port-660775: state=Stopped err=<nil>
	I0318 21:59:08.726656   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	W0318 21:59:08.726813   65170 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:59:08.728839   65170 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-660775" ...
	I0318 21:59:05.035054   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:05.535108   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.035211   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.535398   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.035017   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.534769   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.035221   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.534593   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.035328   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.534533   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.730181   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Start
	I0318 21:59:08.730374   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring networks are active...
	I0318 21:59:08.731140   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network default is active
	I0318 21:59:08.731488   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network mk-default-k8s-diff-port-660775 is active
	I0318 21:59:08.731850   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Getting domain xml...
	I0318 21:59:08.732544   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Creating domain...
	I0318 21:59:10.014924   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting to get IP...
	I0318 21:59:10.015822   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016215   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016299   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.016206   66608 retry.go:31] will retry after 301.369371ms: waiting for machine to come up
	I0318 21:59:07.205807   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206239   65699 main.go:141] libmachine: (no-preload-963041) Found IP for machine: 192.168.72.84
	I0318 21:59:07.206266   65699 main.go:141] libmachine: (no-preload-963041) Reserving static IP address...
	I0318 21:59:07.206281   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has current primary IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206636   65699 main.go:141] libmachine: (no-preload-963041) Reserved static IP address: 192.168.72.84
	I0318 21:59:07.206659   65699 main.go:141] libmachine: (no-preload-963041) Waiting for SSH to be available...
	I0318 21:59:07.206686   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.206711   65699 main.go:141] libmachine: (no-preload-963041) DBG | skip adding static IP to network mk-no-preload-963041 - found existing host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"}
	I0318 21:59:07.206728   65699 main.go:141] libmachine: (no-preload-963041) DBG | Getting to WaitForSSH function...
	I0318 21:59:07.208790   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209157   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.209202   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209306   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH client type: external
	I0318 21:59:07.209331   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa (-rw-------)
	I0318 21:59:07.209367   65699 main.go:141] libmachine: (no-preload-963041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:07.209381   65699 main.go:141] libmachine: (no-preload-963041) DBG | About to run SSH command:
	I0318 21:59:07.209395   65699 main.go:141] libmachine: (no-preload-963041) DBG | exit 0
	I0318 21:59:07.337357   65699 main.go:141] libmachine: (no-preload-963041) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:07.337688   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetConfigRaw
	I0318 21:59:07.338258   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.340609   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.340957   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.340996   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.341213   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:59:07.341396   65699 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:07.341462   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:07.341668   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.343956   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344275   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.344311   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344395   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.344580   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344756   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344891   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.345086   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.345264   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.345276   65699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:07.457491   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:07.457543   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457778   65699 buildroot.go:166] provisioning hostname "no-preload-963041"
	I0318 21:59:07.457802   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457975   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.460729   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461120   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.461145   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461286   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.461480   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461643   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461797   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.461980   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.462179   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.462193   65699 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-963041 && echo "no-preload-963041" | sudo tee /etc/hostname
	I0318 21:59:07.592194   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-963041
	
	I0318 21:59:07.592219   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.594794   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595141   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.595177   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595305   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.595484   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595673   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595836   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.595987   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.596144   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.596160   65699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-963041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-963041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-963041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:07.719593   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
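(Editor's note, not part of the captured log: the two SSH commands above amount to the usual hostname provisioning step — set the transient and persistent hostname, then point 127.0.1.1 in /etc/hosts at the new name. A minimal sketch of the same idea, using the machine name from this log:)

    # Sketch of the hostname step shown above; "no-preload-963041" is the
    # name taken from this log run.
    new=no-preload-963041
    sudo hostname "$new" && echo "$new" | sudo tee /etc/hostname
    if ! grep -q "[[:space:]]$new\$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
            sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $new/" /etc/hosts
        else
            echo "127.0.1.1 $new" | sudo tee -a /etc/hosts
        fi
    fi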
	I0318 21:59:07.719622   65699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:07.719655   65699 buildroot.go:174] setting up certificates
	I0318 21:59:07.719667   65699 provision.go:84] configureAuth start
	I0318 21:59:07.719681   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.719928   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.722544   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.722907   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.722935   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.723095   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.725108   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725391   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.725420   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725522   65699 provision.go:143] copyHostCerts
	I0318 21:59:07.725582   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:07.725595   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:07.725665   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:07.725780   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:07.725792   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:07.725817   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:07.725874   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:07.725881   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:07.725898   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:07.725945   65699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.no-preload-963041 san=[127.0.0.1 192.168.72.84 localhost minikube no-preload-963041]
	I0318 21:59:07.893632   65699 provision.go:177] copyRemoteCerts
	I0318 21:59:07.893685   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:07.893711   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.896227   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896501   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.896527   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896692   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.896859   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.897035   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.897205   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:07.983501   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:08.014432   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:59:08.043755   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:59:08.074388   65699 provision.go:87] duration metric: took 354.707214ms to configureAuth
	I0318 21:59:08.074413   65699 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:08.074571   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:08.074638   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.077314   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077658   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.077690   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077837   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.077996   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078150   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078289   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.078435   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.078582   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.078596   65699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:08.446711   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:08.446745   65699 machine.go:97] duration metric: took 1.105332987s to provisionDockerMachine
	I0318 21:59:08.446757   65699 start.go:293] postStartSetup for "no-preload-963041" (driver="kvm2")
	I0318 21:59:08.446772   65699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:08.446787   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.447090   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:08.447118   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.449551   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.449917   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.449955   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.450117   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.450308   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.450471   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.450611   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.542283   65699 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:08.547389   65699 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:08.547423   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:08.547501   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:08.547606   65699 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:08.547732   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:08.558721   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:08.586136   65699 start.go:296] duration metric: took 139.367706ms for postStartSetup
	I0318 21:59:08.586177   65699 fix.go:56] duration metric: took 19.636089577s for fixHost
	I0318 21:59:08.586201   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.588809   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589192   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.589219   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589435   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.589604   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589731   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589838   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.589972   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.590182   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.590197   65699 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:08.706260   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799148.650279332
	
	I0318 21:59:08.706283   65699 fix.go:216] guest clock: 1710799148.650279332
	I0318 21:59:08.706293   65699 fix.go:229] Guest: 2024-03-18 21:59:08.650279332 +0000 UTC Remote: 2024-03-18 21:59:08.586181408 +0000 UTC m=+272.029432082 (delta=64.097924ms)
	I0318 21:59:08.706337   65699 fix.go:200] guest clock delta is within tolerance: 64.097924ms
	I0318 21:59:08.706350   65699 start.go:83] releasing machines lock for "no-preload-963041", held for 19.756290817s
	I0318 21:59:08.706384   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.706707   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:08.709113   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709389   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.709417   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709561   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710009   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710155   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710229   65699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:08.710278   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.710330   65699 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:08.710349   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.713131   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713154   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713464   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713492   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713521   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713536   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713632   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713739   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713824   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713987   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713988   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714117   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.714177   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714337   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.827151   65699 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:08.833847   65699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:08.985638   65699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:08.992294   65699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:08.992372   65699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:09.009419   65699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:09.009444   65699 start.go:494] detecting cgroup driver to use...
	I0318 21:59:09.009509   65699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:09.031942   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:09.051842   65699 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:09.051901   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:09.068136   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:09.084445   65699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:09.234323   65699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:09.402144   65699 docker.go:233] disabling docker service ...
	I0318 21:59:09.402210   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:09.419960   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:09.434836   65699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:09.572242   65699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:09.718817   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:09.734607   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:09.756470   65699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:09.756533   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.768595   65699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:09.768685   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.780726   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.800700   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.817396   65699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:09.829896   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.842211   65699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.867273   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.880909   65699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:09.893254   65699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:09.893297   65699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:09.910897   65699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:09.922400   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:10.065248   65699 ssh_runner.go:195] Run: sudo systemctl restart crio
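(Editor's note, not part of the captured log: the run of tee/sed/sysctl commands above is the CRI-O preparation performed before restarting the runtime — point crictl at the CRI-O socket, pin the pause image, switch the cgroup manager to cgroupfs, and make sure br_netfilter and IP forwarding are available. A condensed sketch of the core steps, with paths and values copied from the log; the default_sysctls edits are omitted:)

    # Point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"

    # Kernel prerequisites for bridged pod traffic
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

    sudo systemctl daemon-reload
    sudo systemctl restart crio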
	I0318 21:59:10.223498   65699 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:10.223577   65699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:10.230686   65699 start.go:562] Will wait 60s for crictl version
	I0318 21:59:10.230752   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.235527   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:10.278655   65699 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:10.278756   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.310992   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.344925   65699 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 21:59:07.298973   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:09.799803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:10.346255   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:10.349081   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349418   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:10.349437   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349657   65699 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:10.354793   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:10.369744   65699 kubeadm.go:877] updating cluster {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:10.369893   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:59:10.369951   65699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:10.409975   65699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 21:59:10.410001   65699 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:59:10.410062   65699 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.410074   65699 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.410086   65699 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.410122   65699 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.410148   65699 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.410166   65699 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 21:59:10.410213   65699 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.410223   65699 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411690   65699 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.411695   65699 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 21:59:10.411730   65699 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.411747   65699 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.411764   65699 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.411793   65699 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.553195   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 21:59:10.553249   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.555774   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.559123   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.562266   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.571390   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.592690   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.702213   65699 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 21:59:10.702265   65699 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.702314   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857028   65699 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 21:59:10.857072   65699 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.857087   65699 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 21:59:10.857117   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857146   65699 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.857154   65699 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 21:59:10.857180   65699 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.857197   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857214   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857211   65699 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 21:59:10.857250   65699 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.857254   65699 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 21:59:10.857264   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.857275   65699 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.857282   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857305   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.872164   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.872195   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.872268   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.927043   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.927147   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.927095   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.927219   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.972625   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 21:59:10.972740   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:11.016239   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 21:59:11.016291   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.016356   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:11.016380   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.047703   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 21:59:11.047732   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047784   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047849   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.047952   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.069007   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 21:59:11.069064   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 21:59:11.069095   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 21:59:11.069126   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 21:59:11.069139   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:10.035384   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.534785   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.034607   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.535142   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.035259   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.535494   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.034673   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.535452   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.034630   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.535058   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.319858   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320279   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320310   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.320224   66608 retry.go:31] will retry after 253.332307ms: waiting for machine to come up
	I0318 21:59:10.575748   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576242   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576271   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.576194   66608 retry.go:31] will retry after 484.439329ms: waiting for machine to come up
	I0318 21:59:11.061837   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062291   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.062247   66608 retry.go:31] will retry after 520.757249ms: waiting for machine to come up
	I0318 21:59:11.585112   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585541   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585571   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.585485   66608 retry.go:31] will retry after 482.335377ms: waiting for machine to come up
	I0318 21:59:12.068813   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069420   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069456   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:12.069374   66608 retry.go:31] will retry after 936.563875ms: waiting for machine to come up
	I0318 21:59:13.007582   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.007986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.008012   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.007945   66608 retry.go:31] will retry after 864.468016ms: waiting for machine to come up
	I0318 21:59:13.874400   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874910   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874942   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.874875   66608 retry.go:31] will retry after 1.239808671s: waiting for machine to come up
	I0318 21:59:15.116440   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116834   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116855   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:15.116784   66608 retry.go:31] will retry after 1.208141339s: waiting for machine to come up
	I0318 21:59:11.804059   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:14.301199   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:16.301517   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:11.928081   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.330891   65699 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.28291236s)
	I0318 21:59:14.330933   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330948   65699 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.261785854s)
	I0318 21:59:14.330971   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330974   65699 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.402863992s)
	I0318 21:59:14.330979   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.283167958s)
	I0318 21:59:14.330996   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 21:59:14.331011   65699 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 21:59:14.331019   65699 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331043   65699 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.331064   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331086   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:14.336430   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:15.034609   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:15.534895   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.034956   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.535474   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.534736   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.035297   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.534669   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.035540   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.534617   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327415   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:16.327350   66608 retry.go:31] will retry after 2.24875206s: waiting for machine to come up
	I0318 21:59:18.578068   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578644   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:18.578589   66608 retry.go:31] will retry after 2.267791851s: waiting for machine to come up
	I0318 21:59:18.800406   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:20.800524   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:18.591731   65699 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.255273393s)
	I0318 21:59:18.591789   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 21:59:18.591897   65699 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:18.591937   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.260848845s)
	I0318 21:59:18.591958   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 21:59:18.591986   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:18.592046   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:19.859577   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.267508443s)
	I0318 21:59:19.859608   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 21:59:19.859637   65699 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:19.859641   65699 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.267714811s)
	I0318 21:59:19.859674   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 21:59:19.859685   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:20.035133   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.534922   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.035083   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.534538   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.035505   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.535008   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.035123   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.535181   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.034939   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.534985   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.847586   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848099   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848135   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:20.848048   66608 retry.go:31] will retry after 2.918466892s: waiting for machine to come up
	I0318 21:59:23.768491   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.768999   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.769030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:23.768962   66608 retry.go:31] will retry after 4.373256501s: waiting for machine to come up
	I0318 21:59:22.800765   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:24.801392   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:21.944666   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.084944906s)
	I0318 21:59:21.944700   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 21:59:21.944720   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:21.944766   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:24.714752   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.769964684s)
	I0318 21:59:24.714793   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 21:59:24.714827   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:24.714884   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:25.035324   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:25.534635   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.034965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.035448   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.534690   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.034991   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.034585   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.535220   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.146019   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146507   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Found IP for machine: 192.168.50.150
	I0318 21:59:28.146533   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserving static IP address...
	I0318 21:59:28.146549   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has current primary IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146939   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.146966   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserved static IP address: 192.168.50.150
	I0318 21:59:28.146986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | skip adding static IP to network mk-default-k8s-diff-port-660775 - found existing host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"}
	I0318 21:59:28.147006   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Getting to WaitForSSH function...
	I0318 21:59:28.147030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for SSH to be available...
	I0318 21:59:28.149408   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149771   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.149799   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149929   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH client type: external
	I0318 21:59:28.149978   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa (-rw-------)
	I0318 21:59:28.150020   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:28.150039   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | About to run SSH command:
	I0318 21:59:28.150050   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | exit 0
	I0318 21:59:28.273437   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:28.273768   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetConfigRaw
	I0318 21:59:28.274402   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.277330   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.277757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277997   65170 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/config.json ...
	I0318 21:59:28.278217   65170 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:28.278240   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:28.278435   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.280754   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281149   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.281178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281318   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.281495   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281646   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281796   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.281955   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.282163   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.282185   65170 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:28.390614   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:28.390642   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.390896   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:59:28.390923   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.391095   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.394421   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.394838   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.394876   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.395178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.395410   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395775   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.395953   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.396145   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.396160   65170 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-660775 && echo "default-k8s-diff-port-660775" | sudo tee /etc/hostname
	I0318 21:59:28.522303   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-660775
	
	I0318 21:59:28.522347   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.525224   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.525667   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525789   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.525961   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526122   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.526471   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.526651   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.526676   65170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-660775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-660775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-660775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:28.641488   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:28.641521   65170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:28.641547   65170 buildroot.go:174] setting up certificates
	I0318 21:59:28.641555   65170 provision.go:84] configureAuth start
	I0318 21:59:28.641564   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.641871   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.644934   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.645301   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645425   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.647753   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648089   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.648119   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648360   65170 provision.go:143] copyHostCerts
	I0318 21:59:28.648423   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:28.648435   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:28.648507   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:28.648620   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:28.648631   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:28.648660   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:28.648731   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:28.648740   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:28.648769   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:28.648829   65170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-660775 san=[127.0.0.1 192.168.50.150 default-k8s-diff-port-660775 localhost minikube]
	I0318 21:59:28.697191   65170 provision.go:177] copyRemoteCerts
	I0318 21:59:28.697253   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:28.697274   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.699919   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700237   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.700269   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700477   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.700694   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.700882   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.701060   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:28.793840   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:28.829285   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 21:59:28.857628   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:59:28.886344   65170 provision.go:87] duration metric: took 244.778215ms to configureAuth
	I0318 21:59:28.886366   65170 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:28.886527   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:59:28.886593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.889885   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890321   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.890351   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890534   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.890721   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.890879   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.891013   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.891190   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.891366   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.891399   65170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:29.189002   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:29.189033   65170 machine.go:97] duration metric: took 910.801375ms to provisionDockerMachine
	I0318 21:59:29.189046   65170 start.go:293] postStartSetup for "default-k8s-diff-port-660775" (driver="kvm2")
	I0318 21:59:29.189058   65170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:29.189083   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.189409   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:29.189438   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.192164   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.192512   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.192866   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.193045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.193190   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.277850   65170 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:29.282886   65170 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:29.282909   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:29.282975   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:29.283065   65170 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:29.283172   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:29.296052   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:29.323906   65170 start.go:296] duration metric: took 134.847993ms for postStartSetup
	I0318 21:59:29.323945   65170 fix.go:56] duration metric: took 20.61742941s for fixHost
	I0318 21:59:29.323969   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.326616   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.326920   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.327063   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.327300   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327472   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327622   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.327853   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:29.328058   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:29.328070   65170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:59:29.430348   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799169.377980776
	
	I0318 21:59:29.430377   65170 fix.go:216] guest clock: 1710799169.377980776
	I0318 21:59:29.430386   65170 fix.go:229] Guest: 2024-03-18 21:59:29.377980776 +0000 UTC Remote: 2024-03-18 21:59:29.323950953 +0000 UTC m=+359.071824665 (delta=54.029823ms)
	I0318 21:59:29.430411   65170 fix.go:200] guest clock delta is within tolerance: 54.029823ms
	I0318 21:59:29.430420   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 20.723939352s
	I0318 21:59:29.430450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.430727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:29.433339   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433686   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.433713   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433865   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434308   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434531   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434632   65170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:29.434682   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.434783   65170 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:29.434811   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.437380   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437479   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437731   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437760   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437829   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437880   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.438033   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438170   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438244   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438332   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438393   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438603   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.438694   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.540670   65170 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:29.547318   65170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:29.704221   65170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:29.710762   65170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:29.710832   65170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:29.727820   65170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:29.727838   65170 start.go:494] detecting cgroup driver to use...
	I0318 21:59:29.727905   65170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:29.745750   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:29.760984   65170 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:29.761024   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:29.776639   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:29.791749   65170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:29.914380   65170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:30.096200   65170 docker.go:233] disabling docker service ...
	I0318 21:59:30.096281   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:30.112512   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:30.126090   65170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:30.258617   65170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:30.397700   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:30.420478   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:30.443197   65170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:30.443282   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.455577   65170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:30.455630   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.467898   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.480041   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.492501   65170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:30.505178   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.517657   65170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.537376   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.554749   65170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:30.570281   65170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:30.570352   65170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:30.587991   65170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:30.600354   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:30.744678   65170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:30.902192   65170 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:30.902279   65170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:30.907869   65170 start.go:562] Will wait 60s for crictl version
	I0318 21:59:30.907937   65170 ssh_runner.go:195] Run: which crictl
	I0318 21:59:30.913588   65170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:30.957344   65170 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:30.957431   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:30.991141   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:31.024452   65170 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:59:27.301221   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:29.799576   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:26.781379   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.066468133s)
	I0318 21:59:26.781415   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 21:59:26.781445   65699 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:26.781493   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:27.747707   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 21:59:27.747764   65699 cache_images.go:123] Successfully loaded all cached images
	I0318 21:59:27.747769   65699 cache_images.go:92] duration metric: took 17.337757279s to LoadCachedImages
	I0318 21:59:27.747781   65699 kubeadm.go:928] updating node { 192.168.72.84 8443 v1.29.0-rc.2 crio true true} ...
	I0318 21:59:27.747907   65699 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-963041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:27.747986   65699 ssh_runner.go:195] Run: crio config
	I0318 21:59:27.810020   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:27.810048   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:27.810060   65699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:27.810078   65699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.84 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963041 NodeName:no-preload-963041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:27.810242   65699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963041"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
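
The block above is the full multi-document kubeadm.yaml (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a minimal illustrative sketch only, not minikube code, this is how such a multi-document file can be decoded and spot-checked; it assumes gopkg.in/yaml.v3 and a placeholder local copy named kubeadm.yaml:

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Decode each "---"-separated document, print its kind, and show the two
// KubeletConfiguration fields that matter most for the crio runtime.
func main() {
	data, err := os.ReadFile("kubeadm.yaml") // placeholder path, not the real VM location
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the multi-document stream
		}
		fmt.Println("kind:", doc["kind"])
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
			fmt.Println("  containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
		}
	}
}
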
	I0318 21:59:27.810327   65699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 21:59:27.823120   65699 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:27.823172   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:27.834742   65699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 21:59:27.854365   65699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 21:59:27.872873   65699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 21:59:27.891245   65699 ssh_runner.go:195] Run: grep 192.168.72.84	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:27.895305   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:27.907928   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:28.044997   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:28.064471   65699 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041 for IP: 192.168.72.84
	I0318 21:59:28.064489   65699 certs.go:194] generating shared ca certs ...
	I0318 21:59:28.064503   65699 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:28.064668   65699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:28.064733   65699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:28.064747   65699 certs.go:256] generating profile certs ...
	I0318 21:59:28.064847   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/client.key
	I0318 21:59:28.064927   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key.53f57e82
	I0318 21:59:28.064975   65699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key
	I0318 21:59:28.065090   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:28.065140   65699 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:28.065154   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:28.065190   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:28.065218   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:28.065244   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:28.065292   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:28.066189   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:28.108239   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:28.147385   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:28.191255   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:28.231079   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 21:59:28.269730   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:59:28.302326   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:28.331762   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:28.359487   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:28.390196   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:28.422323   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:28.452212   65699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:28.476910   65699 ssh_runner.go:195] Run: openssl version
	I0318 21:59:28.483480   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:28.495230   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500728   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500771   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.507487   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:28.520368   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:28.533700   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540767   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540817   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.549380   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:28.566307   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:28.582377   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589139   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589192   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.597396   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
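
The ln -fs commands above create OpenSSL subject-hash links (51391683.0, 3ec20f2e.0, b5213941.0) so that anything reading the system trust store can find the copied CA certificates. A rough Go sketch of that step, assuming openssl is on PATH and using the paths from the log; it is illustrative, not the actual certs.go implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the ln -fs step: OpenSSL trust stores look
// certificates up by subject hash, so each CA PEM gets a
// /etc/ssl/certs/<hash>.0 symlink pointing back at it.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "ln -fs" semantics: replace any stale link first
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
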
	I0318 21:59:28.610189   65699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:28.616488   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:28.625547   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:28.634680   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:28.643077   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:28.652470   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:28.660641   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
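
Each of the openssl -checkend 86400 runs above asks whether a control-plane certificate expires within the next 24 hours, which is what decides whether existing certs are reused or regenerated on restart. An equivalent check with crypto/x509, shown only as a sketch with one cert path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the crypto/x509 equivalent of `openssl x509 -checkend 86400`:
// it reports whether the certificate's NotAfter falls inside the next d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
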
	I0318 21:59:28.669216   65699 kubeadm.go:391] StartCluster: {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:28.669342   65699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:28.669444   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.719357   65699 cri.go:89] found id: ""
	I0318 21:59:28.719427   65699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:28.733158   65699 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:28.733179   65699 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:28.733186   65699 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:28.733234   65699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:28.744804   65699 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:28.745805   65699 kubeconfig.go:125] found "no-preload-963041" server: "https://192.168.72.84:8443"
	I0318 21:59:28.747888   65699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:28.757871   65699 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.84
	I0318 21:59:28.757896   65699 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:28.757918   65699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:28.757964   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.805988   65699 cri.go:89] found id: ""
	I0318 21:59:28.806057   65699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:28.829257   65699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:28.841515   65699 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:28.841543   65699 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:28.841594   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:59:28.853433   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:28.853499   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:28.864593   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:59:28.875236   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:28.875285   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:28.887756   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.898219   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:28.898271   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.909308   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:59:28.919480   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:28.919540   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:28.930305   65699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:28.941125   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:29.056129   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.261585   65699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.205423679s)
	I0318 21:59:30.261614   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.498583   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.589160   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.713046   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:30.713150   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.214160   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.034539   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.535237   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.034842   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.534620   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.034614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.534583   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.035348   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.534614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.034683   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.534528   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.025614   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:31.028381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028758   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:31.028783   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028960   65170 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:31.033836   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:31.048652   65170 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:31.048798   65170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:59:31.048853   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:31.089246   65170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:59:31.089322   65170 ssh_runner.go:195] Run: which lz4
	I0318 21:59:31.094026   65170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:59:31.098900   65170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:59:31.098929   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:59:33.166556   65170 crio.go:462] duration metric: took 2.072562246s to copy over tarball
	I0318 21:59:33.166639   65170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:59:31.810567   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:34.301018   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:36.346463   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:31.714009   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.762157   65699 api_server.go:72] duration metric: took 1.049110677s to wait for apiserver process to appear ...
	I0318 21:59:31.762188   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:31.762210   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:31.762737   65699 api_server.go:269] stopped: https://192.168.72.84:8443/healthz: Get "https://192.168.72.84:8443/healthz": dial tcp 192.168.72.84:8443: connect: connection refused
	I0318 21:59:32.263205   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.738750   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.738785   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.738802   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.804061   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.804102   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.804116   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.842097   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.842144   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:35.262351   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.267395   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.267439   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:35.763016   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.775072   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.775109   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.262338   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:36.267165   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:36.267207   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.762879   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.074225   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:37.074263   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:37.262637   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.267514   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 21:59:37.275551   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 21:59:37.275579   65699 api_server.go:131] duration metric: took 5.513383348s to wait for apiserver health ...
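
The healthz sequence above is a plain poll: the endpoint first refuses connections, then answers 403 for the anonymous probe, then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200. A minimal sketch of such a loop follows; it is not minikube's api_server.go and, for brevity, skips TLS verification instead of presenting the client certificate the real code uses:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200, tolerating the
// connection-refused, 403 and 500 responses seen in the log while the
// apiserver's post-start hooks complete.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: skip verification rather than loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
	return fmt.Errorf("%s never returned 200 within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.84:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
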
	I0318 21:59:37.275590   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:37.275598   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:37.496330   65699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:37.641915   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:37.659277   65699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:37.684019   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:38.075296   65699 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:38.075333   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:38.075353   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:38.075367   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:38.075375   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:38.075388   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:38.075407   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:38.075418   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:38.075429   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:38.075440   65699 system_pods.go:74] duration metric: took 391.399859ms to wait for pod list to return data ...
	I0318 21:59:38.075452   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:38.252627   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:38.252659   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:38.252670   65699 node_conditions.go:105] duration metric: took 177.209294ms to run NodePressure ...
	I0318 21:59:38.252692   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.662257   65699 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670807   65699 kubeadm.go:733] kubelet initialised
	I0318 21:59:38.670836   65699 kubeadm.go:734] duration metric: took 8.550399ms waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670846   65699 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:38.680740   65699 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.689134   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689157   65699 pod_ready.go:81] duration metric: took 8.393104ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.689169   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689178   65699 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.693796   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693815   65699 pod_ready.go:81] duration metric: took 4.628403ms for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.693824   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693829   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.701225   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701245   65699 pod_ready.go:81] duration metric: took 7.410052ms for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.701254   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701262   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.707848   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707871   65699 pod_ready.go:81] duration metric: took 6.598987ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.707882   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707889   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.066641   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066668   65699 pod_ready.go:81] duration metric: took 358.769058ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.066679   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066687   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.466406   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466440   65699 pod_ready.go:81] duration metric: took 399.746217ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.466449   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466455   65699 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.866206   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866232   65699 pod_ready.go:81] duration metric: took 399.76891ms for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.866240   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866247   65699 pod_ready.go:38] duration metric: took 1.195391629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
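
The pod_ready wait above repeatedly fetches each system-critical pod, treats it as Ready via its PodReady condition, and skips it while the hosting node itself reports NotReady. A stripped-down version of the pod-condition part of that loop using client-go, with a placeholder kubeconfig path and one hard-coded pod name purely for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether a pod's PodReady condition is True, which is the
// test the pod_ready wait keeps re-running for each system pod.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path and pod name, purely for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-963041", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the pod to become Ready")
}
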
	I0318 21:59:39.866263   65699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 21:59:39.879772   65699 ops.go:34] apiserver oom_adj: -16
	I0318 21:59:39.879796   65699 kubeadm.go:591] duration metric: took 11.146603139s to restartPrimaryControlPlane
	I0318 21:59:39.879807   65699 kubeadm.go:393] duration metric: took 11.21059758s to StartCluster
	I0318 21:59:39.879825   65699 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.879915   65699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:59:39.881739   65699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.881970   65699 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:59:39.883934   65699 out.go:177] * Verifying Kubernetes components...
	I0318 21:59:39.882064   65699 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 21:59:39.882254   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:39.885913   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:39.885924   65699 addons.go:69] Setting metrics-server=true in profile "no-preload-963041"
	I0318 21:59:39.885932   65699 addons.go:69] Setting default-storageclass=true in profile "no-preload-963041"
	I0318 21:59:39.885950   65699 addons.go:234] Setting addon metrics-server=true in "no-preload-963041"
	W0318 21:59:39.885958   65699 addons.go:243] addon metrics-server should already be in state true
	I0318 21:59:39.885966   65699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963041"
	I0318 21:59:39.885918   65699 addons.go:69] Setting storage-provisioner=true in profile "no-preload-963041"
	I0318 21:59:39.885985   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886000   65699 addons.go:234] Setting addon storage-provisioner=true in "no-preload-963041"
	W0318 21:59:39.886052   65699 addons.go:243] addon storage-provisioner should already be in state true
	I0318 21:59:39.886075   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886384   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886403   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886437   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886392   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886448   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886438   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.902103   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0318 21:59:39.902574   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.903192   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.903211   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.903568   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.904113   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.904142   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.908122   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0318 21:59:39.908269   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0318 21:59:39.908566   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.908639   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.909237   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.909251   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.909662   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.909834   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.913534   65699 addons.go:234] Setting addon default-storageclass=true in "no-preload-963041"
	W0318 21:59:39.913558   65699 addons.go:243] addon default-storageclass should already be in state true
	I0318 21:59:39.913586   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.913959   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.913992   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.921260   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.921284   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.921661   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.922725   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.922778   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.925575   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0318 21:59:39.926170   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.926799   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.926819   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.933014   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0318 21:59:39.933066   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.934464   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.934527   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.935441   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.935456   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.936236   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.936821   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.936870   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.936983   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.938986   65699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:39.940103   65699 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:39.940115   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 21:59:39.940128   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.942712   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943138   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.943168   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943415   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.943574   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.943690   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.943828   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.944813   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0318 21:59:39.961605   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.962117   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.962140   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.962564   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.962745   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.964606   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.970697   65699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 21:59:35.034845   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:35.535418   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.534613   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.034944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.535119   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.035549   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.534668   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.034813   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.534586   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.222479   65170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.055805805s)
	I0318 21:59:36.222507   65170 crio.go:469] duration metric: took 3.055923767s to extract the tarball
	I0318 21:59:36.222515   65170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:59:36.265990   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:36.314679   65170 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:59:36.314704   65170 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:59:36.314714   65170 kubeadm.go:928] updating node { 192.168.50.150 8444 v1.28.4 crio true true} ...
	I0318 21:59:36.314828   65170 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-660775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:36.314900   65170 ssh_runner.go:195] Run: crio config
	I0318 21:59:36.375889   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:36.375908   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:36.375916   65170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:36.375935   65170 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.150 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-660775 NodeName:default-k8s-diff-port-660775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:36.376058   65170 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.150
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-660775"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:36.376117   65170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:59:36.387851   65170 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:36.387905   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:36.398095   65170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0318 21:59:36.416507   65170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:59:36.437165   65170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0318 21:59:36.458125   65170 ssh_runner.go:195] Run: grep 192.168.50.150	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:36.462688   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:36.476913   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:36.629523   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:36.648679   65170 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775 for IP: 192.168.50.150
	I0318 21:59:36.648697   65170 certs.go:194] generating shared ca certs ...
	I0318 21:59:36.648717   65170 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:36.648870   65170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:36.648942   65170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:36.648956   65170 certs.go:256] generating profile certs ...
	I0318 21:59:36.649061   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/client.key
	I0318 21:59:36.649136   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key.6eb93750
	I0318 21:59:36.649181   65170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key
	I0318 21:59:36.649342   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:36.649408   65170 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:36.649427   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:36.649465   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:36.649502   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:36.649524   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:36.649563   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:36.650116   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:36.709130   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:36.777530   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:36.822349   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:36.861155   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 21:59:36.899264   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:59:36.930697   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:36.960715   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:36.992062   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:37.020001   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:37.051443   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:37.080115   65170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:37.102221   65170 ssh_runner.go:195] Run: openssl version
	I0318 21:59:37.111020   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:37.127447   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132675   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132730   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.139092   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:37.151349   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:37.166470   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172601   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172656   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.179404   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:37.192628   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:37.206758   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211839   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211882   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.218285   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:59:37.230291   65170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:37.235312   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:37.242399   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:37.249658   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:37.256458   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:37.263110   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:37.270329   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:59:37.277040   65170 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:37.277140   65170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:37.277176   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.320525   65170 cri.go:89] found id: ""
	I0318 21:59:37.320595   65170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:37.332584   65170 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:37.332602   65170 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:37.332608   65170 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:37.332678   65170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:37.348017   65170 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:37.349557   65170 kubeconfig.go:125] found "default-k8s-diff-port-660775" server: "https://192.168.50.150:8444"
	I0318 21:59:37.352826   65170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:37.367223   65170 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.150
	I0318 21:59:37.367256   65170 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:37.367267   65170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:37.367315   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.411319   65170 cri.go:89] found id: ""
	I0318 21:59:37.411401   65170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:37.431545   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:37.442587   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:37.442610   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:37.442661   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 21:59:37.452384   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:37.452439   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:37.462519   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 21:59:37.472669   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:37.472728   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:37.483107   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.493177   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:37.493224   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.503546   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 21:59:37.513471   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:37.513512   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:37.524147   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:37.534940   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:37.665308   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.882330   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216992532s)
	I0318 21:59:38.882356   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.110948   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.217267   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.332300   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:39.332389   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.833190   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.972027   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 21:59:39.972078   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 21:59:39.972109   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.975122   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975608   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.975627   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975994   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.976196   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.976371   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.976663   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.982859   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I0318 21:59:39.983263   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.983860   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.983904   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.984308   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.984558   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.986338   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.986645   65699 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:39.986690   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 21:59:39.986718   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.989398   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989741   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.989999   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989951   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.990229   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.990392   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.990517   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:40.115233   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:40.136271   65699 node_ready.go:35] waiting up to 6m0s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:40.232668   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:40.234394   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 21:59:40.234417   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 21:59:40.256237   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:40.301845   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 21:59:40.301873   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 21:59:40.354405   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:40.354435   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 21:59:40.377996   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:41.389416   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.156705132s)
	I0318 21:59:41.389429   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.133120616s)
	I0318 21:59:41.389470   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389475   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389482   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389486   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389763   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389783   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389792   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389799   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389828   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.389874   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389890   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389899   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389938   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.390199   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390398   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.390339   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390375   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390451   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390470   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.397714   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.397736   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.397951   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.397999   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.398017   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.415620   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037584799s)
	I0318 21:59:41.415673   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.415684   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.415964   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.415992   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.416007   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416016   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.416027   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.416207   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.416220   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416229   65699 addons.go:470] Verifying addon metrics-server=true in "no-preload-963041"
	I0318 21:59:41.418761   65699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 21:59:38.798943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:40.800913   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:41.420038   65699 addons.go:505] duration metric: took 1.537986468s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 21:59:40.332810   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.411342   65170 api_server.go:72] duration metric: took 1.079036948s to wait for apiserver process to appear ...
	I0318 21:59:40.411371   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:40.411394   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:40.411932   65170 api_server.go:269] stopped: https://192.168.50.150:8444/healthz: Get "https://192.168.50.150:8444/healthz": dial tcp 192.168.50.150:8444: connect: connection refused
	I0318 21:59:40.911545   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.377410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.377443   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.377471   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.426410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.426468   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.426485   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.448464   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.448523   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:43.912498   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.918271   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.918309   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.411824   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.422200   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:44.422223   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.911509   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.916884   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 21:59:44.928835   65170 api_server.go:141] control plane version: v1.28.4
	I0318 21:59:44.928862   65170 api_server.go:131] duration metric: took 4.517483413s to wait for apiserver health ...
	I0318 21:59:44.928872   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:44.928881   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:44.930794   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:40.035532   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.535482   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.035196   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.534632   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.035183   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.535562   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.034598   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.534971   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.535025   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.932164   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:44.959217   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:45.002449   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:45.017348   65170 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:45.017394   65170 system_pods.go:61] "coredns-5dd5756b68-cjq2v" [9ae899ef-63e4-407d-9013-71552ec87614] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:45.017407   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [286b98ba-bc9e-4e2f-984c-d7b2447aef15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:45.017417   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [7a0db461-f8d5-4331-993e-d7b9345159e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:45.017428   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [e4f5859a-dfcc-41d8-9a17-acb601449821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:45.017443   65170 system_pods.go:61] "kube-proxy-qt2m6" [c3c7c6db-4935-4079-b0e7-60ba2cd886b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:45.017450   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [7115eef0-5ff4-4dfe-9135-88ad8f698e43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:45.017461   65170 system_pods.go:61] "metrics-server-57f55c9bc5-5dtf5" [b19191ee-e2db-4392-82e2-1a95fae76101] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:45.017489   65170 system_pods.go:61] "storage-provisioner" [045d4b30-47a3-4c80-a9e8-c36ef7395e6c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:45.017498   65170 system_pods.go:74] duration metric: took 15.027239ms to wait for pod list to return data ...
	I0318 21:59:45.017511   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:45.020962   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:45.020982   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:45.020991   65170 node_conditions.go:105] duration metric: took 3.47292ms to run NodePressure ...
	I0318 21:59:45.021007   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:45.277662   65170 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282939   65170 kubeadm.go:733] kubelet initialised
	I0318 21:59:45.282958   65170 kubeadm.go:734] duration metric: took 5.277143ms waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282965   65170 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.289546   65170 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:43.299509   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:45.300875   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:42.142145   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:44.641863   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:45.640660   65699 node_ready.go:49] node "no-preload-963041" has status "Ready":"True"
	I0318 21:59:45.640686   65699 node_ready.go:38] duration metric: took 5.50437071s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:45.640697   65699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.647087   65699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652062   65699 pod_ready.go:92] pod "coredns-76f75df574-6mtzp" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.652081   65699 pod_ready.go:81] duration metric: took 4.969873ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652091   65699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.035239   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.535303   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.034742   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.534584   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.034935   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.534952   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.534497   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.035380   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.535498   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.296790   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298834   65170 pod_ready.go:81] duration metric: took 9.259848ms for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.298849   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298868   65170 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.307325   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307367   65170 pod_ready.go:81] duration metric: took 8.486967ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.307380   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307389   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.319473   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319498   65170 pod_ready.go:81] duration metric: took 12.100242ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.319514   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319522   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.407356   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407379   65170 pod_ready.go:81] duration metric: took 87.846686ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.407390   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407395   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806835   65170 pod_ready.go:92] pod "kube-proxy-qt2m6" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.806866   65170 pod_ready.go:81] duration metric: took 399.462221ms for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806878   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:47.814286   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:47.799616   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:50.300118   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:46.659819   65699 pod_ready.go:92] pod "etcd-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:46.659855   65699 pod_ready.go:81] duration metric: took 1.007755238s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:46.659868   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:48.669033   65699 pod_ready.go:102] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:51.168202   65699 pod_ready.go:92] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.168229   65699 pod_ready.go:81] duration metric: took 4.508354098s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.168240   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174243   65699 pod_ready.go:92] pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.174268   65699 pod_ready.go:81] duration metric: took 6.018685ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174280   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179279   65699 pod_ready.go:92] pod "kube-proxy-kkrzx" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.179300   65699 pod_ready.go:81] duration metric: took 5.012711ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179311   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185651   65699 pod_ready.go:92] pod "kube-scheduler-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.185670   65699 pod_ready.go:81] duration metric: took 6.351567ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185678   65699 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:50.034691   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.534680   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.535213   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.034594   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.535195   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.034574   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.535423   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.035369   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.534621   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.315135   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.814432   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.798645   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:54.800561   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:53.191834   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.192346   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.035308   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:55.535503   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.035231   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.534937   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.035317   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.534581   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.034565   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.534830   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.535280   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:59:59.535354   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:59:59.577600   65622 cri.go:89] found id: ""
	I0318 21:59:59.577632   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.577643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:59:59.577651   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:59:59.577710   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:59:59.614134   65622 cri.go:89] found id: ""
	I0318 21:59:59.614158   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.614166   65622 logs.go:278] No container was found matching "etcd"
	I0318 21:59:59.614171   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:59:59.614245   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:59:59.653525   65622 cri.go:89] found id: ""
	I0318 21:59:59.653559   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.653571   65622 logs.go:278] No container was found matching "coredns"
	I0318 21:59:59.653578   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:59:59.653633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:59:59.699104   65622 cri.go:89] found id: ""
	I0318 21:59:59.699128   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.699139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:59:59.699146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:59:59.699214   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:59:59.735750   65622 cri.go:89] found id: ""
	I0318 21:59:59.735779   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.735789   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:59:59.735796   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:59:59.735876   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:59:59.775105   65622 cri.go:89] found id: ""
	I0318 21:59:59.775134   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.775142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:59:59.775149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:59:59.775193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:59:59.814154   65622 cri.go:89] found id: ""
	I0318 21:59:59.814181   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.814190   65622 logs.go:278] No container was found matching "kindnet"
	I0318 21:59:59.814197   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 21:59:59.814254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 21:59:59.852518   65622 cri.go:89] found id: ""
	I0318 21:59:59.852545   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.852556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 21:59:59.852565   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 21:59:59.852578   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:59:59.907243   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 21:59:59.907285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:59:59.922512   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:59:59.922540   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 21:59:55.313448   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.813863   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:56.813885   65170 pod_ready.go:81] duration metric: took 11.006997984s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:56.813893   65170 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:58.820535   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.802709   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:59.299235   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:01.299761   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:57.694309   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:00.192594   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	W0318 22:00:00.059182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:00.059202   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:00.059216   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:00.125654   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:00.125686   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:02.675440   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:02.689549   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:02.689628   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:02.731742   65622 cri.go:89] found id: ""
	I0318 22:00:02.731764   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.731771   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:02.731776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:02.731823   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:02.809611   65622 cri.go:89] found id: ""
	I0318 22:00:02.809643   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.809651   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:02.809656   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:02.809699   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:02.853939   65622 cri.go:89] found id: ""
	I0318 22:00:02.853972   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.853982   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:02.853990   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:02.854050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:02.892668   65622 cri.go:89] found id: ""
	I0318 22:00:02.892699   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.892709   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:02.892715   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:02.892773   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:02.934267   65622 cri.go:89] found id: ""
	I0318 22:00:02.934296   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.934307   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:02.934313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:02.934370   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:02.972533   65622 cri.go:89] found id: ""
	I0318 22:00:02.972556   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.972564   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:02.972569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:02.972614   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:03.011102   65622 cri.go:89] found id: ""
	I0318 22:00:03.011128   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.011137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:03.011142   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:03.011188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:03.060636   65622 cri.go:89] found id: ""
	I0318 22:00:03.060664   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.060673   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:03.060696   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:03.060710   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:03.145042   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:03.145070   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:03.145087   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:03.218475   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:03.218504   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:03.262154   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:03.262185   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:03.316766   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:03.316803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:00.821070   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.821300   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:03.301922   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.799844   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.693235   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:04.693324   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.833936   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:05.850780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:05.850858   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:05.894909   65622 cri.go:89] found id: ""
	I0318 22:00:05.894931   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.894938   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:05.894944   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:05.894987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:05.935989   65622 cri.go:89] found id: ""
	I0318 22:00:05.936020   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.936028   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:05.936032   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:05.936081   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:05.976774   65622 cri.go:89] found id: ""
	I0318 22:00:05.976797   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.976805   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:05.976811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:05.976869   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:06.015350   65622 cri.go:89] found id: ""
	I0318 22:00:06.015376   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.015387   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:06.015394   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:06.015453   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:06.059389   65622 cri.go:89] found id: ""
	I0318 22:00:06.059416   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.059427   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:06.059434   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:06.059513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:06.099524   65622 cri.go:89] found id: ""
	I0318 22:00:06.099544   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.099553   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:06.099558   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:06.099601   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:06.140343   65622 cri.go:89] found id: ""
	I0318 22:00:06.140374   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.140386   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:06.140393   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:06.140448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:06.179217   65622 cri.go:89] found id: ""
	I0318 22:00:06.179247   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.179257   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:06.179268   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:06.179286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:06.231348   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:06.231379   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:06.246049   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:06.246084   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:06.326182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:06.326203   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:06.326215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:06.405862   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:06.405895   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:08.955965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:08.970007   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:08.970076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:09.008724   65622 cri.go:89] found id: ""
	I0318 22:00:09.008752   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.008764   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:09.008781   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:09.008856   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:09.050121   65622 cri.go:89] found id: ""
	I0318 22:00:09.050158   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.050165   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:09.050170   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:09.050227   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:09.090263   65622 cri.go:89] found id: ""
	I0318 22:00:09.090293   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.090304   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:09.090312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:09.090375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:09.127645   65622 cri.go:89] found id: ""
	I0318 22:00:09.127679   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.127690   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:09.127697   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:09.127755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:09.169171   65622 cri.go:89] found id: ""
	I0318 22:00:09.169199   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.169211   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:09.169218   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:09.169278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:09.209923   65622 cri.go:89] found id: ""
	I0318 22:00:09.209949   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.209956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:09.209963   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:09.210013   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:09.247990   65622 cri.go:89] found id: ""
	I0318 22:00:09.248029   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.248039   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:09.248050   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:09.248109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:09.287287   65622 cri.go:89] found id: ""
	I0318 22:00:09.287326   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.287337   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:09.287347   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:09.287369   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:09.342877   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:09.342902   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:09.359137   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:09.359159   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:09.454504   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:09.454528   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:09.454543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:09.549191   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:09.549223   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:05.322655   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.820557   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.821227   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.799881   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.802803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:06.694723   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.096415   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:12.112886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:12.112969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:12.155639   65622 cri.go:89] found id: ""
	I0318 22:00:12.155662   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.155670   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:12.155676   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:12.155729   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:12.199252   65622 cri.go:89] found id: ""
	I0318 22:00:12.199283   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.199293   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:12.199301   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:12.199385   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:12.239688   65622 cri.go:89] found id: ""
	I0318 22:00:12.239719   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.239728   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:12.239734   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:12.239788   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:12.278610   65622 cri.go:89] found id: ""
	I0318 22:00:12.278640   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.278651   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:12.278659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:12.278724   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:12.318834   65622 cri.go:89] found id: ""
	I0318 22:00:12.318864   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.318873   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:12.318881   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:12.318939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:12.358964   65622 cri.go:89] found id: ""
	I0318 22:00:12.358986   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.358994   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:12.359002   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:12.359050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:12.399041   65622 cri.go:89] found id: ""
	I0318 22:00:12.399070   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.399080   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:12.399087   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:12.399151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:12.445019   65622 cri.go:89] found id: ""
	I0318 22:00:12.445043   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.445053   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:12.445064   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:12.445079   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:12.504987   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:12.505023   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:12.521381   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:12.521408   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:12.601574   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:12.601599   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:12.601615   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:12.683772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:12.683801   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:11.821593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:13.821792   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.299680   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.300073   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:11.693179   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.194532   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:15.229005   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:15.248227   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:15.248296   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:15.307918   65622 cri.go:89] found id: ""
	I0318 22:00:15.307940   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.307947   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:15.307953   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:15.307997   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:15.367388   65622 cri.go:89] found id: ""
	I0318 22:00:15.367417   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.367436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:15.367453   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:15.367513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:15.410880   65622 cri.go:89] found id: ""
	I0318 22:00:15.410910   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.410919   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:15.410926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:15.410983   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:15.450980   65622 cri.go:89] found id: ""
	I0318 22:00:15.451004   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.451011   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:15.451018   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:15.451071   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:15.491196   65622 cri.go:89] found id: ""
	I0318 22:00:15.491222   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.491233   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:15.491239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:15.491284   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:15.537135   65622 cri.go:89] found id: ""
	I0318 22:00:15.537159   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.537166   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:15.537173   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:15.537226   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:15.580730   65622 cri.go:89] found id: ""
	I0318 22:00:15.580762   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.580772   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:15.580780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:15.580852   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:15.626221   65622 cri.go:89] found id: ""
	I0318 22:00:15.626252   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.626265   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:15.626276   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:15.626292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.670571   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:15.670600   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:15.725485   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:15.725519   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:15.742790   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:15.742820   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:15.824867   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:15.824889   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:15.824924   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.407070   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:18.421757   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:18.421824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:18.461024   65622 cri.go:89] found id: ""
	I0318 22:00:18.461044   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.461052   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:18.461058   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:18.461104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:18.499002   65622 cri.go:89] found id: ""
	I0318 22:00:18.499032   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.499040   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:18.499046   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:18.499091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:18.539207   65622 cri.go:89] found id: ""
	I0318 22:00:18.539237   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.539248   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:18.539255   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:18.539315   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:18.579691   65622 cri.go:89] found id: ""
	I0318 22:00:18.579717   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.579726   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:18.579733   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:18.579814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:18.625084   65622 cri.go:89] found id: ""
	I0318 22:00:18.625111   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.625120   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:18.625126   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:18.625178   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:18.669012   65622 cri.go:89] found id: ""
	I0318 22:00:18.669038   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.669047   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:18.669053   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:18.669101   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:18.707523   65622 cri.go:89] found id: ""
	I0318 22:00:18.707544   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.707551   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:18.707557   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:18.707611   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:18.755138   65622 cri.go:89] found id: ""
	I0318 22:00:18.755162   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.755173   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:18.755184   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:18.755199   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:18.809140   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:18.809163   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:18.827102   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:18.827125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:18.904168   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:18.904194   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:18.904209   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.982438   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:18.982471   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.822593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.321691   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.798687   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.802403   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.302525   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.692709   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.692875   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:20.693620   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.532643   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:21.547477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:21.547545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:21.585013   65622 cri.go:89] found id: ""
	I0318 22:00:21.585038   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.585049   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:21.585056   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:21.585114   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:21.628115   65622 cri.go:89] found id: ""
	I0318 22:00:21.628139   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.628147   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:21.628153   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:21.628207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:21.664896   65622 cri.go:89] found id: ""
	I0318 22:00:21.664931   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.664942   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:21.664948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:21.665010   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:21.705770   65622 cri.go:89] found id: ""
	I0318 22:00:21.705794   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.705803   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:21.705811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:21.705868   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:21.751268   65622 cri.go:89] found id: ""
	I0318 22:00:21.751296   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.751305   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:21.751313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:21.751376   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:21.798688   65622 cri.go:89] found id: ""
	I0318 22:00:21.798714   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.798724   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:21.798732   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:21.798800   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:21.839253   65622 cri.go:89] found id: ""
	I0318 22:00:21.839281   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.839290   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:21.839297   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:21.839365   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:21.884026   65622 cri.go:89] found id: ""
	I0318 22:00:21.884055   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.884068   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:21.884086   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:21.884105   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:21.940412   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:21.940446   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:21.956634   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:21.956660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:22.031458   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:22.031481   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:22.031497   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:22.115902   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:22.115932   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:24.665945   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:24.680474   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:24.680545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:24.719692   65622 cri.go:89] found id: ""
	I0318 22:00:24.719711   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.719718   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:24.719723   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:24.719768   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:24.760734   65622 cri.go:89] found id: ""
	I0318 22:00:24.760758   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.760767   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:24.760775   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:24.760830   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:24.802688   65622 cri.go:89] found id: ""
	I0318 22:00:24.802710   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.802717   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:24.802723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:24.802778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:24.842693   65622 cri.go:89] found id: ""
	I0318 22:00:24.842715   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.842723   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:24.842730   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:24.842796   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:24.887149   65622 cri.go:89] found id: ""
	I0318 22:00:24.887173   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.887185   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:24.887195   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:24.887278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:24.926465   65622 cri.go:89] found id: ""
	I0318 22:00:24.926511   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.926522   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:24.926530   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:24.926584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:24.966876   65622 cri.go:89] found id: ""
	I0318 22:00:24.966897   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.966904   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:24.966910   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:24.966957   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:20.820297   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:22.821250   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:24.825337   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.800104   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:26.299105   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.193665   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.194188   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.007251   65622 cri.go:89] found id: ""
	I0318 22:00:25.007277   65622 logs.go:276] 0 containers: []
	W0318 22:00:25.007288   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:25.007298   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:25.007311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:25.092214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:25.092235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:25.092247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:25.173041   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:25.173076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:25.221169   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:25.221194   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:25.276322   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:25.276352   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:27.792368   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:27.809294   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:27.809359   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:27.848976   65622 cri.go:89] found id: ""
	I0318 22:00:27.849005   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.849015   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:27.849023   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:27.849076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:27.890416   65622 cri.go:89] found id: ""
	I0318 22:00:27.890437   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.890445   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:27.890450   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:27.890505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:27.934782   65622 cri.go:89] found id: ""
	I0318 22:00:27.934807   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.934819   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:27.934827   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:27.934911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:27.972251   65622 cri.go:89] found id: ""
	I0318 22:00:27.972275   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.972283   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:27.972288   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:27.972366   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:28.011321   65622 cri.go:89] found id: ""
	I0318 22:00:28.011345   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.011357   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:28.011363   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:28.011421   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:28.048087   65622 cri.go:89] found id: ""
	I0318 22:00:28.048109   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.048116   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:28.048122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:28.048169   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:28.088840   65622 cri.go:89] found id: ""
	I0318 22:00:28.088868   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.088878   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:28.088886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:28.088961   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:28.128687   65622 cri.go:89] found id: ""
	I0318 22:00:28.128714   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.128723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:28.128733   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:28.128745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:28.170853   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:28.170882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:28.224825   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:28.224850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:28.239744   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:28.239773   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:28.318640   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:28.318664   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:28.318680   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:27.321417   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:29.326924   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:28.798399   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.800456   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:27.692517   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.194633   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.897430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:30.914894   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:30.914950   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:30.952709   65622 cri.go:89] found id: ""
	I0318 22:00:30.952737   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.952748   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:30.952756   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:30.952814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:30.991113   65622 cri.go:89] found id: ""
	I0318 22:00:30.991142   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.991151   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:30.991159   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:30.991218   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:31.030248   65622 cri.go:89] found id: ""
	I0318 22:00:31.030273   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.030283   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:31.030291   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:31.030356   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:31.070836   65622 cri.go:89] found id: ""
	I0318 22:00:31.070860   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.070868   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:31.070874   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:31.070941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:31.109134   65622 cri.go:89] found id: ""
	I0318 22:00:31.109154   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.109162   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:31.109167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:31.109222   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:31.149757   65622 cri.go:89] found id: ""
	I0318 22:00:31.149784   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.149794   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:31.149802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:31.149862   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:31.190355   65622 cri.go:89] found id: ""
	I0318 22:00:31.190383   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.190393   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:31.190401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:31.190462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:31.229866   65622 cri.go:89] found id: ""
	I0318 22:00:31.229892   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.229900   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:31.229909   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:31.229926   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:31.284984   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:31.285027   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:31.301026   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:31.301050   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:31.378120   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:31.378143   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:31.378158   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:31.459445   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:31.459475   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:34.003989   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:34.020959   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:34.021012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:34.060045   65622 cri.go:89] found id: ""
	I0318 22:00:34.060074   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.060086   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:34.060103   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:34.060151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:34.101259   65622 cri.go:89] found id: ""
	I0318 22:00:34.101289   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.101299   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:34.101307   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:34.101372   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:34.141056   65622 cri.go:89] found id: ""
	I0318 22:00:34.141085   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.141096   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:34.141103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:34.141166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:34.179757   65622 cri.go:89] found id: ""
	I0318 22:00:34.179786   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.179797   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:34.179805   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:34.179872   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:34.221928   65622 cri.go:89] found id: ""
	I0318 22:00:34.221956   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.221989   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:34.221998   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:34.222063   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:34.260775   65622 cri.go:89] found id: ""
	I0318 22:00:34.260796   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.260804   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:34.260809   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:34.260866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:34.300910   65622 cri.go:89] found id: ""
	I0318 22:00:34.300936   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.300944   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:34.300950   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:34.300994   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:34.343581   65622 cri.go:89] found id: ""
	I0318 22:00:34.343611   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.343619   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:34.343628   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:34.343640   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:34.399298   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:34.399330   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:34.414580   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:34.414619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:34.488013   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:34.488031   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:34.488043   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:34.580958   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:34.580994   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:31.821301   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:34.322210   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:33.299227   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.800314   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:32.693924   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.191865   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.129601   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:37.147758   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:37.147827   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:37.194763   65622 cri.go:89] found id: ""
	I0318 22:00:37.194784   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.194791   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:37.194797   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:37.194845   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:37.236298   65622 cri.go:89] found id: ""
	I0318 22:00:37.236326   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.236334   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:37.236353   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:37.236488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:37.274776   65622 cri.go:89] found id: ""
	I0318 22:00:37.274803   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.274813   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:37.274819   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:37.274883   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:37.319360   65622 cri.go:89] found id: ""
	I0318 22:00:37.319385   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.319395   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:37.319401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:37.319463   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:37.365699   65622 cri.go:89] found id: ""
	I0318 22:00:37.365726   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.365734   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:37.365740   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:37.365824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:37.404758   65622 cri.go:89] found id: ""
	I0318 22:00:37.404789   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.404799   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:37.404807   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:37.404874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:37.444567   65622 cri.go:89] found id: ""
	I0318 22:00:37.444591   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.444598   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:37.444603   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:37.444665   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:37.487729   65622 cri.go:89] found id: ""
	I0318 22:00:37.487752   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.487760   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:37.487767   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:37.487786   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:37.566214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:37.566235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:37.566258   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:37.647847   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:37.647930   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:37.693027   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:37.693057   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:37.748111   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:37.748152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:36.324995   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.820800   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.298887   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.299570   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.193636   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:39.693273   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.277510   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:40.292312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:40.292384   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:40.330335   65622 cri.go:89] found id: ""
	I0318 22:00:40.330368   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.330379   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:40.330386   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:40.330441   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:40.372534   65622 cri.go:89] found id: ""
	I0318 22:00:40.372560   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.372570   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:40.372577   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:40.372624   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:40.409430   65622 cri.go:89] found id: ""
	I0318 22:00:40.409460   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.409471   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:40.409478   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:40.409525   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:40.448350   65622 cri.go:89] found id: ""
	I0318 22:00:40.448372   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.448380   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:40.448385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:40.448431   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:40.490526   65622 cri.go:89] found id: ""
	I0318 22:00:40.490550   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.490559   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:40.490564   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:40.490613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:40.528926   65622 cri.go:89] found id: ""
	I0318 22:00:40.528953   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.528963   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:40.528971   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:40.529031   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:40.565779   65622 cri.go:89] found id: ""
	I0318 22:00:40.565808   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.565818   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:40.565826   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:40.565902   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:40.604152   65622 cri.go:89] found id: ""
	I0318 22:00:40.604181   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.604192   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:40.604201   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:40.604215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:40.689274   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:40.689310   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:40.736810   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:40.736844   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:40.796033   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:40.796061   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:40.811906   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:40.811929   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:40.889595   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:43.390663   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:43.407179   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:43.407254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:43.448653   65622 cri.go:89] found id: ""
	I0318 22:00:43.448685   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.448696   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:43.448704   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:43.448772   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:43.489437   65622 cri.go:89] found id: ""
	I0318 22:00:43.489464   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.489472   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:43.489478   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:43.489533   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:43.564173   65622 cri.go:89] found id: ""
	I0318 22:00:43.564199   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.564209   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:43.564217   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:43.564278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:43.606221   65622 cri.go:89] found id: ""
	I0318 22:00:43.606250   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.606260   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:43.606267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:43.606333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:43.646748   65622 cri.go:89] found id: ""
	I0318 22:00:43.646782   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.646794   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:43.646802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:43.646864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:43.690465   65622 cri.go:89] found id: ""
	I0318 22:00:43.690496   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.690509   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:43.690519   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:43.690584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:43.730421   65622 cri.go:89] found id: ""
	I0318 22:00:43.730454   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.730464   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:43.730473   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:43.730538   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:43.769597   65622 cri.go:89] found id: ""
	I0318 22:00:43.769626   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.769636   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:43.769646   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:43.769660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:43.858316   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:43.858351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:43.907387   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:43.907417   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:43.963234   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:43.963271   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:43.979226   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:43.979253   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:44.065174   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:40.821224   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:43.319945   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.300484   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.300924   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.302264   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.192508   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.192743   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.566048   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:46.583140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:46.583212   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:46.624593   65622 cri.go:89] found id: ""
	I0318 22:00:46.624634   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.624643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:46.624649   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:46.624700   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:46.664828   65622 cri.go:89] found id: ""
	I0318 22:00:46.664858   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.664868   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:46.664874   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:46.664944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:46.703632   65622 cri.go:89] found id: ""
	I0318 22:00:46.703658   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.703668   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:46.703675   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:46.703736   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:46.743379   65622 cri.go:89] found id: ""
	I0318 22:00:46.743409   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.743420   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:46.743427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:46.743487   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:46.784145   65622 cri.go:89] found id: ""
	I0318 22:00:46.784169   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.784178   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:46.784184   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:46.784233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:46.826469   65622 cri.go:89] found id: ""
	I0318 22:00:46.826491   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.826498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:46.826504   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:46.826559   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:46.868061   65622 cri.go:89] found id: ""
	I0318 22:00:46.868089   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.868102   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:46.868110   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:46.868167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:46.910584   65622 cri.go:89] found id: ""
	I0318 22:00:46.910612   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.910622   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:46.910630   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:46.910642   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:46.954131   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:46.954157   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:47.008706   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:47.008737   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:47.024447   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:47.024474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:47.113208   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:47.113228   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:47.113242   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:49.699416   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:49.714870   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:49.714943   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:49.754386   65622 cri.go:89] found id: ""
	I0318 22:00:49.754415   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.754424   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:49.754430   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:49.754485   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:49.800223   65622 cri.go:89] found id: ""
	I0318 22:00:49.800248   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.800258   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:49.800268   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:49.800331   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:49.846747   65622 cri.go:89] found id: ""
	I0318 22:00:49.846775   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.846785   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:49.846793   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:49.846842   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:49.885554   65622 cri.go:89] found id: ""
	I0318 22:00:49.885581   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.885592   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:49.885600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:49.885652   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:49.925116   65622 cri.go:89] found id: ""
	I0318 22:00:49.925136   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.925144   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:49.925149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:49.925193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:49.968467   65622 cri.go:89] found id: ""
	I0318 22:00:49.968491   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.968498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:49.968503   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:49.968575   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:45.321277   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:47.821205   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.822803   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:48.799135   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.801798   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.692554   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.193102   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:51.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.016222   65622 cri.go:89] found id: ""
	I0318 22:00:50.016253   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.016261   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:50.016267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:50.016320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:50.057053   65622 cri.go:89] found id: ""
	I0318 22:00:50.057074   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.057082   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:50.057090   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:50.057101   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:50.137602   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:50.137631   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:50.213200   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:50.213227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:50.293533   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:50.293568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:50.312993   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:50.313019   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:50.399235   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:52.900027   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:52.914846   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:52.914918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:52.951864   65622 cri.go:89] found id: ""
	I0318 22:00:52.951887   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.951895   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:52.951900   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:52.951959   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:52.992339   65622 cri.go:89] found id: ""
	I0318 22:00:52.992374   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.992386   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:52.992393   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:52.992448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:53.030499   65622 cri.go:89] found id: ""
	I0318 22:00:53.030527   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.030536   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:53.030543   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:53.030610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:53.069607   65622 cri.go:89] found id: ""
	I0318 22:00:53.069635   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.069645   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:53.069652   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:53.069706   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:53.110235   65622 cri.go:89] found id: ""
	I0318 22:00:53.110256   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.110263   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:53.110269   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:53.110320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:53.152066   65622 cri.go:89] found id: ""
	I0318 22:00:53.152092   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.152100   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:53.152106   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:53.152166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:53.195360   65622 cri.go:89] found id: ""
	I0318 22:00:53.195386   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.195395   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:53.195402   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:53.195448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:53.235134   65622 cri.go:89] found id: ""
	I0318 22:00:53.235159   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.235166   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:53.235174   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:53.235186   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:53.286442   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:53.286473   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:53.342152   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:53.342183   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:53.358414   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:53.358438   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:53.430515   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:53.430534   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:53.430545   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:52.320478   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:54.321815   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.301031   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:55.799954   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.693639   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.193657   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.016088   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:56.034274   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:56.034350   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:56.095539   65622 cri.go:89] found id: ""
	I0318 22:00:56.095565   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.095581   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:56.095588   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:56.095645   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:56.149796   65622 cri.go:89] found id: ""
	I0318 22:00:56.149824   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.149834   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:56.149845   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:56.149907   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:56.205720   65622 cri.go:89] found id: ""
	I0318 22:00:56.205745   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.205760   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:56.205768   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:56.205828   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:56.250790   65622 cri.go:89] found id: ""
	I0318 22:00:56.250834   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.250862   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:56.250876   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:56.250944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:56.290516   65622 cri.go:89] found id: ""
	I0318 22:00:56.290538   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.290545   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:56.290552   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:56.290609   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:56.335528   65622 cri.go:89] found id: ""
	I0318 22:00:56.335557   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.335570   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:56.335577   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:56.335638   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:56.380336   65622 cri.go:89] found id: ""
	I0318 22:00:56.380365   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.380376   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:56.380383   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:56.380448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:56.426326   65622 cri.go:89] found id: ""
	I0318 22:00:56.426351   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.426359   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:56.426368   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:56.426385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:56.479966   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:56.480002   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.495557   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:56.495588   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:56.573474   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:56.573495   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:56.573506   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:56.657795   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:56.657826   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.206212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:59.221879   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:59.221936   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:59.265944   65622 cri.go:89] found id: ""
	I0318 22:00:59.265976   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.265986   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:59.265994   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:59.266052   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:59.305105   65622 cri.go:89] found id: ""
	I0318 22:00:59.305125   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.305132   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:59.305137   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:59.305182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:59.343573   65622 cri.go:89] found id: ""
	I0318 22:00:59.343600   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.343610   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:59.343618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:59.343674   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:59.385560   65622 cri.go:89] found id: ""
	I0318 22:00:59.385580   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.385587   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:59.385592   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:59.385639   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:59.422955   65622 cri.go:89] found id: ""
	I0318 22:00:59.422983   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.422994   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:59.423001   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:59.423062   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:59.460526   65622 cri.go:89] found id: ""
	I0318 22:00:59.460550   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.460561   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:59.460569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:59.460627   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:59.502703   65622 cri.go:89] found id: ""
	I0318 22:00:59.502732   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.502739   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:59.502753   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:59.502803   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:59.539097   65622 cri.go:89] found id: ""
	I0318 22:00:59.539120   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.539128   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:59.539136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:59.539147   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:59.613607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:59.613628   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:59.613643   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:59.697432   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:59.697460   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.744643   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:59.744671   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:59.800670   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:59.800704   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.820977   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.822348   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:57.804405   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.299016   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.692166   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.692526   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.318430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:02.334082   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:02.334158   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:02.383122   65622 cri.go:89] found id: ""
	I0318 22:01:02.383151   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.383161   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:02.383169   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:02.383229   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:02.426847   65622 cri.go:89] found id: ""
	I0318 22:01:02.426874   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.426884   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:02.426891   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:02.426955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:02.466377   65622 cri.go:89] found id: ""
	I0318 22:01:02.466403   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.466429   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:02.466437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:02.466501   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:02.506916   65622 cri.go:89] found id: ""
	I0318 22:01:02.506943   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.506953   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:02.506961   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:02.507021   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:02.549401   65622 cri.go:89] found id: ""
	I0318 22:01:02.549431   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.549439   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:02.549445   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:02.549494   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:02.589498   65622 cri.go:89] found id: ""
	I0318 22:01:02.589524   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.589535   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:02.589542   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:02.589603   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:02.626325   65622 cri.go:89] found id: ""
	I0318 22:01:02.626358   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.626369   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:02.626376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:02.626440   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:02.664922   65622 cri.go:89] found id: ""
	I0318 22:01:02.664949   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.664958   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:02.664969   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:02.664986   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:02.722853   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:02.722883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:02.740280   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:02.740305   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:02.819215   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:02.819232   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:02.819244   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:02.902355   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:02.902395   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:01.319955   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:03.324127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.299297   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:04.299721   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.694116   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.193971   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.452180   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:05.465921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:05.465981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:05.507224   65622 cri.go:89] found id: ""
	I0318 22:01:05.507245   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.507255   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:05.507262   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:05.507329   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:05.544705   65622 cri.go:89] found id: ""
	I0318 22:01:05.544737   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.544748   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:05.544754   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:05.544814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:05.583552   65622 cri.go:89] found id: ""
	I0318 22:01:05.583580   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.583592   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:05.583600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:05.583668   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:05.620969   65622 cri.go:89] found id: ""
	I0318 22:01:05.620995   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.621002   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:05.621009   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:05.621054   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:05.662789   65622 cri.go:89] found id: ""
	I0318 22:01:05.662816   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.662827   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:05.662835   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:05.662900   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:05.701457   65622 cri.go:89] found id: ""
	I0318 22:01:05.701496   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.701506   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:05.701513   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:05.701566   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:05.742050   65622 cri.go:89] found id: ""
	I0318 22:01:05.742078   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.742088   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:05.742095   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:05.742162   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:05.782620   65622 cri.go:89] found id: ""
	I0318 22:01:05.782645   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.782653   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:05.782661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:05.782672   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:05.875779   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:05.875815   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.927687   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:05.927711   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:05.979235   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:05.979264   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:05.997508   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:05.997536   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:06.073619   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:08.574277   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:08.588248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:08.588312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:08.626950   65622 cri.go:89] found id: ""
	I0318 22:01:08.626976   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.626987   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:08.626993   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:08.627050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:08.670404   65622 cri.go:89] found id: ""
	I0318 22:01:08.670429   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.670436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:08.670442   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:08.670505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:08.706036   65622 cri.go:89] found id: ""
	I0318 22:01:08.706063   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.706072   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:08.706079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:08.706134   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:08.743251   65622 cri.go:89] found id: ""
	I0318 22:01:08.743279   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.743290   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:08.743298   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:08.743361   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:08.782303   65622 cri.go:89] found id: ""
	I0318 22:01:08.782329   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.782340   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:08.782347   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:08.782413   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:08.827060   65622 cri.go:89] found id: ""
	I0318 22:01:08.827086   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.827095   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:08.827104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:08.827157   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:08.867098   65622 cri.go:89] found id: ""
	I0318 22:01:08.867126   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.867137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:08.867145   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:08.867192   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:08.906283   65622 cri.go:89] found id: ""
	I0318 22:01:08.906314   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.906323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:08.906334   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:08.906349   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:08.959145   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:08.959171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:08.976307   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:08.976336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:09.049255   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:09.049285   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:09.049300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:09.139458   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:09.139493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.821257   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.320779   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:06.799599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.800534   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.301906   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:07.195710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:09.691770   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.687215   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:11.701855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:11.701926   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:11.740185   65622 cri.go:89] found id: ""
	I0318 22:01:11.740213   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.740224   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:11.740231   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:11.740293   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:11.782083   65622 cri.go:89] found id: ""
	I0318 22:01:11.782110   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.782119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:11.782126   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:11.782187   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:11.830887   65622 cri.go:89] found id: ""
	I0318 22:01:11.830910   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.830920   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:11.830928   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:11.830981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:11.868585   65622 cri.go:89] found id: ""
	I0318 22:01:11.868607   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.868613   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:11.868618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:11.868673   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:11.912298   65622 cri.go:89] found id: ""
	I0318 22:01:11.912324   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.912336   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:11.912343   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:11.912396   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:11.957511   65622 cri.go:89] found id: ""
	I0318 22:01:11.957536   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.957546   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:11.957553   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:11.957610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:11.998894   65622 cri.go:89] found id: ""
	I0318 22:01:11.998916   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.998927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:11.998934   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:11.998984   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:12.039419   65622 cri.go:89] found id: ""
	I0318 22:01:12.039446   65622 logs.go:276] 0 containers: []
	W0318 22:01:12.039458   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:12.039468   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:12.039484   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:12.094721   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:12.094750   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:12.110328   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:12.110351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:12.183351   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:12.183371   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:12.183385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:12.260772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:12.260812   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:14.806518   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:14.821701   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:14.821760   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:14.864280   65622 cri.go:89] found id: ""
	I0318 22:01:14.864307   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.864316   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:14.864322   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:14.864380   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:14.913041   65622 cri.go:89] found id: ""
	I0318 22:01:14.913071   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.913083   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:14.913091   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:14.913155   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:14.951563   65622 cri.go:89] found id: ""
	I0318 22:01:14.951586   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.951594   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:14.951600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:14.951651   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:10.321379   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:12.321708   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.324578   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:13.303344   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:15.799107   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.692795   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.192711   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:16.192974   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.993070   65622 cri.go:89] found id: ""
	I0318 22:01:14.993103   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.993114   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:14.993122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:14.993182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:15.033552   65622 cri.go:89] found id: ""
	I0318 22:01:15.033580   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.033591   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:15.033600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:15.033660   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:15.075982   65622 cri.go:89] found id: ""
	I0318 22:01:15.076009   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.076020   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:15.076031   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:15.076090   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:15.118757   65622 cri.go:89] found id: ""
	I0318 22:01:15.118784   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.118795   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:15.118801   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:15.118844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:15.160333   65622 cri.go:89] found id: ""
	I0318 22:01:15.160355   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.160366   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:15.160374   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:15.160387   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:15.239607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:15.239635   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:15.239653   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:15.324254   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:15.324285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:15.370722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:15.370754   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:15.423268   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:15.423297   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:17.940107   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:17.954692   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:17.954749   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:18.001810   65622 cri.go:89] found id: ""
	I0318 22:01:18.001831   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.001838   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:18.001844   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:18.001903   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:18.042871   65622 cri.go:89] found id: ""
	I0318 22:01:18.042897   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.042909   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:18.042916   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:18.042975   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:18.083933   65622 cri.go:89] found id: ""
	I0318 22:01:18.083956   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.083964   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:18.083969   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:18.084019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:18.125590   65622 cri.go:89] found id: ""
	I0318 22:01:18.125617   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.125628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:18.125636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:18.125697   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:18.166696   65622 cri.go:89] found id: ""
	I0318 22:01:18.166727   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.166737   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:18.166745   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:18.166806   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:18.211273   65622 cri.go:89] found id: ""
	I0318 22:01:18.211297   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.211308   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:18.211315   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:18.211382   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:18.251821   65622 cri.go:89] found id: ""
	I0318 22:01:18.251844   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.251851   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:18.251860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:18.251918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:18.290507   65622 cri.go:89] found id: ""
	I0318 22:01:18.290531   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.290541   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:18.290552   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:18.290568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:18.349013   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:18.349041   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:18.366082   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:18.366113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:18.441742   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:18.441766   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:18.441780   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:18.535299   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:18.535335   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:16.820809   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.820856   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:17.800874   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.301479   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.691838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.692582   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:21.077652   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:21.092980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:21.093039   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:21.132742   65622 cri.go:89] found id: ""
	I0318 22:01:21.132762   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.132770   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:21.132776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:21.132833   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:21.170814   65622 cri.go:89] found id: ""
	I0318 22:01:21.170836   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.170844   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:21.170849   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:21.170911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:21.212812   65622 cri.go:89] found id: ""
	I0318 22:01:21.212845   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.212853   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:21.212860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:21.212924   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:21.254010   65622 cri.go:89] found id: ""
	I0318 22:01:21.254036   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.254044   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:21.254052   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:21.254095   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:21.292032   65622 cri.go:89] found id: ""
	I0318 22:01:21.292061   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.292073   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:21.292083   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:21.292152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:21.336946   65622 cri.go:89] found id: ""
	I0318 22:01:21.336975   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.336985   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:21.336992   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:21.337043   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:21.380295   65622 cri.go:89] found id: ""
	I0318 22:01:21.380319   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.380328   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:21.380336   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:21.380399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:21.417674   65622 cri.go:89] found id: ""
	I0318 22:01:21.417701   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.417708   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:21.417717   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:21.417728   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:21.470782   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:21.470808   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:21.486015   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:21.486036   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:21.560654   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:21.560682   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:21.560699   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:21.644108   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:21.644146   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:24.190787   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:24.205695   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:24.205761   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:24.262577   65622 cri.go:89] found id: ""
	I0318 22:01:24.262602   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.262610   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:24.262615   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:24.262680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:24.304807   65622 cri.go:89] found id: ""
	I0318 22:01:24.304835   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.304845   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:24.304853   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:24.304933   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:24.345595   65622 cri.go:89] found id: ""
	I0318 22:01:24.345670   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.345688   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:24.345696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:24.345762   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:24.388471   65622 cri.go:89] found id: ""
	I0318 22:01:24.388498   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.388508   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:24.388515   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:24.388573   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:24.429610   65622 cri.go:89] found id: ""
	I0318 22:01:24.429641   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.429653   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:24.429663   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:24.429728   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:24.469661   65622 cri.go:89] found id: ""
	I0318 22:01:24.469683   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.469690   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:24.469696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:24.469740   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:24.508086   65622 cri.go:89] found id: ""
	I0318 22:01:24.508115   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.508126   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:24.508133   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:24.508195   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:24.548963   65622 cri.go:89] found id: ""
	I0318 22:01:24.548988   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.548998   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:24.549009   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:24.549028   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:24.603983   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:24.604012   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:24.620185   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:24.620207   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:24.699677   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:24.699699   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:24.699713   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:24.778830   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:24.778884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:20.821237   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.320180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:22.302559   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:24.800442   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.193491   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:25.692671   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.334749   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:27.349132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:27.349188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:27.394163   65622 cri.go:89] found id: ""
	I0318 22:01:27.394190   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.394197   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:27.394203   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:27.394259   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:27.435176   65622 cri.go:89] found id: ""
	I0318 22:01:27.435198   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.435207   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:27.435215   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:27.435273   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:27.475388   65622 cri.go:89] found id: ""
	I0318 22:01:27.475414   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.475422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:27.475427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:27.475474   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:27.516225   65622 cri.go:89] found id: ""
	I0318 22:01:27.516247   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.516255   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:27.516265   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:27.516321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:27.554423   65622 cri.go:89] found id: ""
	I0318 22:01:27.554451   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.554459   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:27.554465   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:27.554518   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:27.592315   65622 cri.go:89] found id: ""
	I0318 22:01:27.592342   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.592352   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:27.592360   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:27.592418   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:27.634820   65622 cri.go:89] found id: ""
	I0318 22:01:27.634842   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.634849   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:27.634855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:27.634912   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:27.673677   65622 cri.go:89] found id: ""
	I0318 22:01:27.673703   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.673713   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:27.673724   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:27.673738   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:27.728342   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:27.728370   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:27.745465   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:27.745493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:27.817800   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:27.817822   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:27.817836   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:27.905115   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:27.905152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:25.322575   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.323097   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.821127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.302001   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.799369   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.693253   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.192347   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.450454   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:30.464916   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:30.464969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:30.504399   65622 cri.go:89] found id: ""
	I0318 22:01:30.504432   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.504443   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:30.504452   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:30.504505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:30.543216   65622 cri.go:89] found id: ""
	I0318 22:01:30.543240   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.543248   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:30.543254   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:30.543310   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:30.581415   65622 cri.go:89] found id: ""
	I0318 22:01:30.581440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.581451   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:30.581459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:30.581515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:30.620419   65622 cri.go:89] found id: ""
	I0318 22:01:30.620440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.620447   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:30.620453   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:30.620495   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:30.671859   65622 cri.go:89] found id: ""
	I0318 22:01:30.671886   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.671893   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:30.671899   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:30.671955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:30.732705   65622 cri.go:89] found id: ""
	I0318 22:01:30.732732   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.732742   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:30.732750   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:30.732811   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:30.793811   65622 cri.go:89] found id: ""
	I0318 22:01:30.793839   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.793850   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:30.793856   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:30.793915   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:30.851516   65622 cri.go:89] found id: ""
	I0318 22:01:30.851539   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.851546   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:30.851555   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:30.851566   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:30.907463   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:30.907496   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:30.924254   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:30.924286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:31.002155   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:31.002177   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:31.002193   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:31.085486   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:31.085515   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:33.627379   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:33.641314   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:33.641378   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:33.683093   65622 cri.go:89] found id: ""
	I0318 22:01:33.683119   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.683129   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:33.683136   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:33.683193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:33.724006   65622 cri.go:89] found id: ""
	I0318 22:01:33.724034   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.724042   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:33.724048   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:33.724091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:33.761196   65622 cri.go:89] found id: ""
	I0318 22:01:33.761224   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.761240   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:33.761248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:33.761306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:33.800636   65622 cri.go:89] found id: ""
	I0318 22:01:33.800661   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.800670   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:33.800676   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:33.800733   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:33.839423   65622 cri.go:89] found id: ""
	I0318 22:01:33.839450   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.839458   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:33.839464   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:33.839508   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:33.883076   65622 cri.go:89] found id: ""
	I0318 22:01:33.883102   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.883112   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:33.883118   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:33.883174   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:33.921886   65622 cri.go:89] found id: ""
	I0318 22:01:33.921909   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.921920   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:33.921926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:33.921981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:33.964632   65622 cri.go:89] found id: ""
	I0318 22:01:33.964659   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.964670   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:33.964680   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:33.964700   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:34.043708   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:34.043731   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:34.043743   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:34.129150   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:34.129178   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:34.176067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:34.176089   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:34.231399   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:34.231433   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:32.324221   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.821547   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.301599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.798017   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.692835   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.693519   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.747929   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:36.761803   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:36.761859   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:36.806407   65622 cri.go:89] found id: ""
	I0318 22:01:36.806434   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.806441   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:36.806447   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:36.806498   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:36.849046   65622 cri.go:89] found id: ""
	I0318 22:01:36.849073   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.849084   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:36.849092   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:36.849152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:36.889880   65622 cri.go:89] found id: ""
	I0318 22:01:36.889910   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.889922   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:36.889929   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:36.889995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:36.936012   65622 cri.go:89] found id: ""
	I0318 22:01:36.936033   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.936041   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:36.936046   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:36.936094   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:36.977538   65622 cri.go:89] found id: ""
	I0318 22:01:36.977568   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.977578   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:36.977587   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:36.977647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:37.014843   65622 cri.go:89] found id: ""
	I0318 22:01:37.014870   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.014881   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:37.014888   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:37.014956   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:37.055058   65622 cri.go:89] found id: ""
	I0318 22:01:37.055086   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.055097   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:37.055104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:37.055167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:37.100605   65622 cri.go:89] found id: ""
	I0318 22:01:37.100633   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.100642   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:37.100652   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:37.100666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:37.181840   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:37.181874   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:37.232689   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:37.232721   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:37.287264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:37.287294   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:37.305614   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:37.305638   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:37.389196   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:39.889461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:39.904409   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:39.904472   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:39.944610   65622 cri.go:89] found id: ""
	I0318 22:01:39.944633   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.944641   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:39.944647   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:39.944701   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:37.323580   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.325038   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.798108   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:38.799072   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:40.799797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.694495   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.192489   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:41.193100   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.984337   65622 cri.go:89] found id: ""
	I0318 22:01:39.984360   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.984367   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:39.984373   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:39.984427   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:40.026238   65622 cri.go:89] found id: ""
	I0318 22:01:40.026264   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.026276   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:40.026282   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:40.026338   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:40.075591   65622 cri.go:89] found id: ""
	I0318 22:01:40.075619   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.075628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:40.075636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:40.075686   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:40.126829   65622 cri.go:89] found id: ""
	I0318 22:01:40.126859   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.126871   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:40.126880   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:40.126941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:40.167695   65622 cri.go:89] found id: ""
	I0318 22:01:40.167724   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.167735   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:40.167744   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:40.167802   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:40.205545   65622 cri.go:89] found id: ""
	I0318 22:01:40.205570   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.205582   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:40.205589   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:40.205636   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:40.245521   65622 cri.go:89] found id: ""
	I0318 22:01:40.245547   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.245556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:40.245567   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:40.245583   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:40.306315   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:40.306348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:40.324996   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:40.325021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:40.406484   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:40.406513   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:40.406526   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:40.492294   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:40.492323   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:43.034812   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:43.049661   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:43.049727   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:43.089419   65622 cri.go:89] found id: ""
	I0318 22:01:43.089444   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.089453   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:43.089461   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:43.089515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:43.130350   65622 cri.go:89] found id: ""
	I0318 22:01:43.130384   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.130394   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:43.130401   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:43.130462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:43.171480   65622 cri.go:89] found id: ""
	I0318 22:01:43.171506   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.171515   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:43.171522   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:43.171567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:43.210215   65622 cri.go:89] found id: ""
	I0318 22:01:43.210240   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.210249   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:43.210258   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:43.210312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:43.247024   65622 cri.go:89] found id: ""
	I0318 22:01:43.247049   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.247056   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:43.247063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:43.247113   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:43.283614   65622 cri.go:89] found id: ""
	I0318 22:01:43.283640   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.283651   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:43.283659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:43.283716   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:43.327442   65622 cri.go:89] found id: ""
	I0318 22:01:43.327468   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.327478   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:43.327486   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:43.327544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:43.365732   65622 cri.go:89] found id: ""
	I0318 22:01:43.365760   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.365769   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:43.365780   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:43.365793   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:43.425359   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:43.425396   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:43.442136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:43.442161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:43.519737   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:43.519762   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:43.519777   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:43.602933   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:43.602972   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:41.821043   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:44.322040   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:42.802267   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.301098   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:43.692766   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.693595   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:46.146009   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:46.161266   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:46.161333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:46.203056   65622 cri.go:89] found id: ""
	I0318 22:01:46.203082   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.203094   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:46.203101   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:46.203159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:46.245954   65622 cri.go:89] found id: ""
	I0318 22:01:46.245981   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.245991   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:46.245998   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:46.246069   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:46.282395   65622 cri.go:89] found id: ""
	I0318 22:01:46.282420   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.282431   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:46.282438   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:46.282497   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:46.322036   65622 cri.go:89] found id: ""
	I0318 22:01:46.322061   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.322072   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:46.322079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:46.322136   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:46.360951   65622 cri.go:89] found id: ""
	I0318 22:01:46.360973   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.360981   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:46.360987   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:46.361049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:46.399334   65622 cri.go:89] found id: ""
	I0318 22:01:46.399364   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.399382   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:46.399391   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:46.399450   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:46.443891   65622 cri.go:89] found id: ""
	I0318 22:01:46.443922   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.443933   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:46.443940   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:46.443990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:46.483047   65622 cri.go:89] found id: ""
	I0318 22:01:46.483088   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.483099   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:46.483110   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:46.483124   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:46.542995   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:46.543026   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.559582   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:46.559605   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:46.637046   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:46.637065   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:46.637076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:46.719628   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:46.719657   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.263990   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:49.278403   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:49.278469   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:49.322980   65622 cri.go:89] found id: ""
	I0318 22:01:49.323003   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.323014   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:49.323021   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:49.323077   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:49.360100   65622 cri.go:89] found id: ""
	I0318 22:01:49.360120   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.360127   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:49.360132   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:49.360180   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:49.402044   65622 cri.go:89] found id: ""
	I0318 22:01:49.402084   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.402095   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:49.402103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:49.402164   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:49.442337   65622 cri.go:89] found id: ""
	I0318 22:01:49.442367   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.442391   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:49.442397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:49.442448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:49.479079   65622 cri.go:89] found id: ""
	I0318 22:01:49.479111   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.479124   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:49.479132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:49.479197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:49.526057   65622 cri.go:89] found id: ""
	I0318 22:01:49.526080   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.526090   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:49.526098   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:49.526159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:49.566720   65622 cri.go:89] found id: ""
	I0318 22:01:49.566747   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.566759   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:49.566767   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:49.566821   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:49.603120   65622 cri.go:89] found id: ""
	I0318 22:01:49.603142   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.603152   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:49.603163   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:49.603180   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:49.677879   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:49.677904   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:49.677921   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:49.762904   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:49.762933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.809332   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:49.809358   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:49.861568   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:49.861599   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.322167   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.322495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:47.800006   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.298196   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.193259   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.195154   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.377996   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:52.396078   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:52.396159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:52.435945   65622 cri.go:89] found id: ""
	I0318 22:01:52.435972   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.435980   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:52.435985   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:52.436034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:52.478723   65622 cri.go:89] found id: ""
	I0318 22:01:52.478754   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.478765   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:52.478772   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:52.478835   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:52.522240   65622 cri.go:89] found id: ""
	I0318 22:01:52.522267   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.522275   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:52.522281   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:52.522336   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:52.560168   65622 cri.go:89] found id: ""
	I0318 22:01:52.560195   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.560202   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:52.560208   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:52.560253   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:52.599730   65622 cri.go:89] found id: ""
	I0318 22:01:52.599752   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.599759   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:52.599765   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:52.599810   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:52.640357   65622 cri.go:89] found id: ""
	I0318 22:01:52.640386   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.640400   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:52.640407   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:52.640465   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:52.680925   65622 cri.go:89] found id: ""
	I0318 22:01:52.680954   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.680966   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:52.680972   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:52.681041   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:52.719537   65622 cri.go:89] found id: ""
	I0318 22:01:52.719561   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.719570   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:52.719580   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:52.719597   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:52.773264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:52.773292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:52.788278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:52.788302   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:52.866674   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:52.866700   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:52.866714   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:52.952228   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:52.952263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:50.821598   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:53.321546   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.302659   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:54.799292   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.692794   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.192968   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.499710   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:55.514986   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:55.515049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:55.561168   65622 cri.go:89] found id: ""
	I0318 22:01:55.561191   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.561198   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:55.561204   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:55.561252   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:55.606505   65622 cri.go:89] found id: ""
	I0318 22:01:55.606534   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.606545   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:55.606552   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:55.606613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:55.648625   65622 cri.go:89] found id: ""
	I0318 22:01:55.648655   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.648665   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:55.648672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:55.648731   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:55.690878   65622 cri.go:89] found id: ""
	I0318 22:01:55.690903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.690914   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:55.690923   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:55.690987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:55.729873   65622 cri.go:89] found id: ""
	I0318 22:01:55.729903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.729914   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:55.729921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:55.729982   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:55.767926   65622 cri.go:89] found id: ""
	I0318 22:01:55.767951   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.767959   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:55.767965   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:55.768025   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:55.809907   65622 cri.go:89] found id: ""
	I0318 22:01:55.809934   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.809942   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:55.809947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:55.810009   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:55.853992   65622 cri.go:89] found id: ""
	I0318 22:01:55.854023   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.854032   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:55.854041   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:55.854060   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:55.932160   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:55.932185   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:55.932200   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:56.019976   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:56.020010   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:56.063901   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:56.063935   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:56.119282   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:56.119314   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:58.636555   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:58.651774   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:58.651851   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:58.697005   65622 cri.go:89] found id: ""
	I0318 22:01:58.697037   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.697047   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:58.697055   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:58.697128   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:58.742190   65622 cri.go:89] found id: ""
	I0318 22:01:58.742218   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.742229   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:58.742236   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:58.742297   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:58.779335   65622 cri.go:89] found id: ""
	I0318 22:01:58.779359   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.779378   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:58.779385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:58.779445   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:58.818936   65622 cri.go:89] found id: ""
	I0318 22:01:58.818964   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.818972   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:58.818980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:58.819034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:58.856473   65622 cri.go:89] found id: ""
	I0318 22:01:58.856500   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.856511   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:58.856518   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:58.856579   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:58.897381   65622 cri.go:89] found id: ""
	I0318 22:01:58.897412   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.897423   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:58.897432   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:58.897503   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:58.938179   65622 cri.go:89] found id: ""
	I0318 22:01:58.938209   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.938221   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:58.938228   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:58.938295   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:58.981021   65622 cri.go:89] found id: ""
	I0318 22:01:58.981049   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.981059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:58.981067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:58.981081   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:59.054749   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:59.054779   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:59.070160   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:59.070188   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:59.150369   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:59.150385   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:59.150398   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:59.238341   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:59.238381   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:55.821471   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.822495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.299408   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.299964   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.193704   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.194959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.790139   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:01.807948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:01.808006   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:01.855198   65622 cri.go:89] found id: ""
	I0318 22:02:01.855224   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.855231   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:01.855238   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:01.855291   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:01.895292   65622 cri.go:89] found id: ""
	I0318 22:02:01.895313   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.895321   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:01.895326   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:01.895381   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:01.934102   65622 cri.go:89] found id: ""
	I0318 22:02:01.934127   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.934139   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:01.934146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:01.934196   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:01.975676   65622 cri.go:89] found id: ""
	I0318 22:02:01.975704   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.975715   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:01.975723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:01.975789   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:02.015656   65622 cri.go:89] found id: ""
	I0318 22:02:02.015691   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.015701   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:02.015710   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:02.015771   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:02.058634   65622 cri.go:89] found id: ""
	I0318 22:02:02.058658   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.058666   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:02.058672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:02.058719   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:02.096655   65622 cri.go:89] found id: ""
	I0318 22:02:02.096681   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.096692   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:02.096700   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:02.096767   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:02.137485   65622 cri.go:89] found id: ""
	I0318 22:02:02.137510   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.137519   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:02.137527   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:02.137543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:02.221269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:02.221304   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:02.265816   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:02.265846   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:02.321554   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:02.321592   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:02.338503   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:02.338530   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:02.431779   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:04.932229   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:04.948859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:04.948931   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:00.321126   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:02.321899   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.798818   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:03.800605   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:05.801459   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.693520   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.192449   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:06.192843   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.995353   65622 cri.go:89] found id: ""
	I0318 22:02:04.995379   65622 logs.go:276] 0 containers: []
	W0318 22:02:04.995386   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:04.995392   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:04.995438   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:05.034886   65622 cri.go:89] found id: ""
	I0318 22:02:05.034911   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.034922   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:05.034929   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:05.034995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:05.076635   65622 cri.go:89] found id: ""
	I0318 22:02:05.076663   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.076673   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:05.076681   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:05.076742   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:05.119481   65622 cri.go:89] found id: ""
	I0318 22:02:05.119506   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.119514   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:05.119520   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:05.119571   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:05.162331   65622 cri.go:89] found id: ""
	I0318 22:02:05.162354   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.162369   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:05.162376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:05.162428   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:05.206038   65622 cri.go:89] found id: ""
	I0318 22:02:05.206066   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.206076   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:05.206084   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:05.206142   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:05.251273   65622 cri.go:89] found id: ""
	I0318 22:02:05.251298   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.251309   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:05.251316   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:05.251375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:05.292855   65622 cri.go:89] found id: ""
	I0318 22:02:05.292882   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.292892   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:05.292917   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:05.292933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:05.310330   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:05.310354   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:05.384915   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:05.384938   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:05.384957   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:05.472147   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:05.472182   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:05.544328   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:05.544351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:08.101241   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:08.117397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:08.117515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:08.160011   65622 cri.go:89] found id: ""
	I0318 22:02:08.160035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.160043   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:08.160048   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:08.160100   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:08.202826   65622 cri.go:89] found id: ""
	I0318 22:02:08.202849   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.202860   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:08.202867   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:08.202935   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:08.241743   65622 cri.go:89] found id: ""
	I0318 22:02:08.241780   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.241792   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:08.241800   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:08.241864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:08.280725   65622 cri.go:89] found id: ""
	I0318 22:02:08.280758   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.280769   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:08.280777   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:08.280840   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:08.324015   65622 cri.go:89] found id: ""
	I0318 22:02:08.324035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.324041   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:08.324047   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:08.324104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:08.367332   65622 cri.go:89] found id: ""
	I0318 22:02:08.367356   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.367368   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:08.367375   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:08.367433   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:08.407042   65622 cri.go:89] found id: ""
	I0318 22:02:08.407066   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.407073   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:08.407079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:08.407126   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:08.443800   65622 cri.go:89] found id: ""
	I0318 22:02:08.443820   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.443827   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:08.443836   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:08.443850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:08.459139   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:08.459172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:08.534893   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:08.534918   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:08.534934   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:08.627283   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:08.627322   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:08.672928   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:08.672967   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:06.821775   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:09.322004   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.299572   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:10.799620   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.693106   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.192341   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.230296   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:11.248814   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:11.248891   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:11.297030   65622 cri.go:89] found id: ""
	I0318 22:02:11.297056   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.297065   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:11.297072   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:11.297133   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:11.348811   65622 cri.go:89] found id: ""
	I0318 22:02:11.348837   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.348847   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:11.348854   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:11.348939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:11.412137   65622 cri.go:89] found id: ""
	I0318 22:02:11.412161   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.412168   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:11.412174   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:11.412231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:11.452098   65622 cri.go:89] found id: ""
	I0318 22:02:11.452128   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.452139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:11.452147   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:11.452207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:11.492477   65622 cri.go:89] found id: ""
	I0318 22:02:11.492509   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.492519   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:11.492527   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:11.492588   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:11.532208   65622 cri.go:89] found id: ""
	I0318 22:02:11.532234   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.532244   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:11.532252   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:11.532306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:11.570515   65622 cri.go:89] found id: ""
	I0318 22:02:11.570545   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.570556   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:11.570563   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:11.570633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:11.613031   65622 cri.go:89] found id: ""
	I0318 22:02:11.613052   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.613069   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:11.613079   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:11.613098   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:11.672019   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:11.672048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:11.687528   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:11.687550   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:11.761149   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:11.761172   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:11.761187   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.847273   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:11.847311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:14.393016   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:14.409657   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:14.409732   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:14.451669   65622 cri.go:89] found id: ""
	I0318 22:02:14.451697   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.451711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:14.451717   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:14.451763   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:14.503383   65622 cri.go:89] found id: ""
	I0318 22:02:14.503408   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.503419   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:14.503427   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:14.503491   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:14.543027   65622 cri.go:89] found id: ""
	I0318 22:02:14.543048   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.543056   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:14.543061   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:14.543104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:14.583615   65622 cri.go:89] found id: ""
	I0318 22:02:14.583639   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.583649   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:14.583656   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:14.583713   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:14.621176   65622 cri.go:89] found id: ""
	I0318 22:02:14.621206   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.621217   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:14.621225   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:14.621283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:14.659419   65622 cri.go:89] found id: ""
	I0318 22:02:14.659440   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.659448   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:14.659454   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:14.659499   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:14.699307   65622 cri.go:89] found id: ""
	I0318 22:02:14.699337   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.699347   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:14.699354   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:14.699416   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:14.737379   65622 cri.go:89] found id: ""
	I0318 22:02:14.737406   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.737414   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:14.737421   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:14.737432   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:14.793912   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:14.793939   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:14.809577   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:14.809604   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:14.898740   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:14.898767   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:14.898782   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.821139   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.821610   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.299590   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.303956   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.692089   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.693750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:14.981009   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:14.981038   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.526944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:17.543437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:17.543488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:17.585722   65622 cri.go:89] found id: ""
	I0318 22:02:17.585747   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.585757   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:17.585765   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:17.585820   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:17.623603   65622 cri.go:89] found id: ""
	I0318 22:02:17.623632   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.623642   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:17.623650   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:17.623712   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:17.666086   65622 cri.go:89] found id: ""
	I0318 22:02:17.666113   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.666122   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:17.666130   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:17.666188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:17.714403   65622 cri.go:89] found id: ""
	I0318 22:02:17.714430   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.714440   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:17.714448   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:17.714527   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:17.753174   65622 cri.go:89] found id: ""
	I0318 22:02:17.753199   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.753206   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:17.753212   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:17.753270   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:17.794962   65622 cri.go:89] found id: ""
	I0318 22:02:17.794992   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.795002   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:17.795010   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:17.795068   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:17.835446   65622 cri.go:89] found id: ""
	I0318 22:02:17.835469   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.835477   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:17.835482   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:17.835529   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:17.872243   65622 cri.go:89] found id: ""
	I0318 22:02:17.872271   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.872279   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:17.872287   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:17.872299   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.915485   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:17.915520   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:17.969133   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:17.969161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:17.984278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:17.984300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:18.055851   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:18.055871   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:18.055884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:16.320827   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:18.321654   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.800563   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.300888   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.694101   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.191376   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.646312   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:20.660153   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:20.660220   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:20.704341   65622 cri.go:89] found id: ""
	I0318 22:02:20.704365   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.704376   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:20.704388   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:20.704443   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:20.747673   65622 cri.go:89] found id: ""
	I0318 22:02:20.747694   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.747702   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:20.747708   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:20.747753   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:20.787547   65622 cri.go:89] found id: ""
	I0318 22:02:20.787574   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.787585   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:20.787593   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:20.787694   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:20.830416   65622 cri.go:89] found id: ""
	I0318 22:02:20.830450   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.830461   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:20.830469   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:20.830531   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:20.871867   65622 cri.go:89] found id: ""
	I0318 22:02:20.871899   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.871912   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:20.871919   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:20.871980   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:20.915574   65622 cri.go:89] found id: ""
	I0318 22:02:20.915602   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.915614   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:20.915622   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:20.915680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:20.956277   65622 cri.go:89] found id: ""
	I0318 22:02:20.956313   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.956322   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:20.956329   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:20.956399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:20.997686   65622 cri.go:89] found id: ""
	I0318 22:02:20.997715   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.997723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:20.997732   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:20.997745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:21.015019   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:21.015048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:21.092090   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:21.092117   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:21.092133   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:21.169118   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:21.169149   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:21.215267   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:21.215298   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:23.769587   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:23.784063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:23.784119   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:23.825704   65622 cri.go:89] found id: ""
	I0318 22:02:23.825726   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.825733   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:23.825740   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:23.825795   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:23.871536   65622 cri.go:89] found id: ""
	I0318 22:02:23.871561   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.871579   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:23.871586   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:23.871647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:23.911388   65622 cri.go:89] found id: ""
	I0318 22:02:23.911415   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.911422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:23.911428   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:23.911478   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:23.956649   65622 cri.go:89] found id: ""
	I0318 22:02:23.956671   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.956679   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:23.956687   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:23.956755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:23.999368   65622 cri.go:89] found id: ""
	I0318 22:02:23.999395   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.999405   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:23.999413   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:23.999471   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:24.039075   65622 cri.go:89] found id: ""
	I0318 22:02:24.039105   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.039118   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:24.039124   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:24.039186   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:24.079473   65622 cri.go:89] found id: ""
	I0318 22:02:24.079502   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.079513   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:24.079521   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:24.079587   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:24.118019   65622 cri.go:89] found id: ""
	I0318 22:02:24.118048   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.118059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:24.118069   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:24.118085   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:24.174530   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:24.174562   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:24.191685   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:24.191724   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:24.282133   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:24.282158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:24.282172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:24.366181   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:24.366228   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:20.322586   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.820488   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.820555   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.798797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.799501   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.192760   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.193279   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.912982   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:26.927364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:26.927425   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:26.968236   65622 cri.go:89] found id: ""
	I0318 22:02:26.968259   65622 logs.go:276] 0 containers: []
	W0318 22:02:26.968267   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:26.968272   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:26.968339   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:27.008226   65622 cri.go:89] found id: ""
	I0318 22:02:27.008251   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.008261   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:27.008267   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:27.008321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:27.047742   65622 cri.go:89] found id: ""
	I0318 22:02:27.047767   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.047777   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:27.047784   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:27.047844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:27.090692   65622 cri.go:89] found id: ""
	I0318 22:02:27.090722   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.090734   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:27.090741   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:27.090797   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:27.126596   65622 cri.go:89] found id: ""
	I0318 22:02:27.126621   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.126629   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:27.126635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:27.126684   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:27.162492   65622 cri.go:89] found id: ""
	I0318 22:02:27.162521   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.162530   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:27.162535   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:27.162583   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:27.203480   65622 cri.go:89] found id: ""
	I0318 22:02:27.203504   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.203517   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:27.203524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:27.203598   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:27.247140   65622 cri.go:89] found id: ""
	I0318 22:02:27.247162   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.247172   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:27.247182   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:27.247198   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:27.328507   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:27.328529   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:27.328543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:27.409269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:27.409303   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:27.459615   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:27.459647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:27.512980   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:27.513014   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:26.821222   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.321682   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:27.302631   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.799175   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.693239   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.192207   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.193072   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:30.030021   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:30.045235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:30.045288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:30.092857   65622 cri.go:89] found id: ""
	I0318 22:02:30.092896   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.092919   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:30.092927   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:30.092977   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:30.133145   65622 cri.go:89] found id: ""
	I0318 22:02:30.133169   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.133176   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:30.133181   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:30.133244   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:30.179214   65622 cri.go:89] found id: ""
	I0318 22:02:30.179242   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.179252   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:30.179259   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:30.179323   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:30.221500   65622 cri.go:89] found id: ""
	I0318 22:02:30.221524   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.221533   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:30.221541   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:30.221585   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:30.262483   65622 cri.go:89] found id: ""
	I0318 22:02:30.262505   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.262516   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:30.262524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:30.262584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:30.308456   65622 cri.go:89] found id: ""
	I0318 22:02:30.308482   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.308493   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:30.308500   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:30.308544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:30.346818   65622 cri.go:89] found id: ""
	I0318 22:02:30.346845   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.346853   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:30.346859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:30.346914   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:30.387265   65622 cri.go:89] found id: ""
	I0318 22:02:30.387298   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.387307   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:30.387317   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:30.387336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:30.446382   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:30.446409   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:30.462305   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:30.462329   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:30.538560   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:30.538583   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:30.538598   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:30.622537   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:30.622571   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:33.172154   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:33.186477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:33.186540   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:33.223436   65622 cri.go:89] found id: ""
	I0318 22:02:33.223464   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.223474   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:33.223481   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:33.223537   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:33.264785   65622 cri.go:89] found id: ""
	I0318 22:02:33.264810   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.264821   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:33.264829   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:33.264881   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:33.308014   65622 cri.go:89] found id: ""
	I0318 22:02:33.308035   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.308045   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:33.308055   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:33.308109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:33.348188   65622 cri.go:89] found id: ""
	I0318 22:02:33.348215   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.348224   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:33.348231   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:33.348292   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:33.387905   65622 cri.go:89] found id: ""
	I0318 22:02:33.387935   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.387946   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:33.387954   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:33.388015   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:33.430915   65622 cri.go:89] found id: ""
	I0318 22:02:33.430944   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.430956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:33.430964   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:33.431019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:33.473103   65622 cri.go:89] found id: ""
	I0318 22:02:33.473128   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.473135   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:33.473140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:33.473197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:33.512960   65622 cri.go:89] found id: ""
	I0318 22:02:33.512992   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.513003   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:33.513015   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:33.513029   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:33.569517   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:33.569554   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:33.585235   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:33.585263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:33.659494   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:33.659519   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:33.659538   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:33.749134   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:33.749181   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:31.820868   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.822075   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.802719   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:34.301730   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.692959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.194871   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.306589   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:36.321602   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:36.321654   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:36.364047   65622 cri.go:89] found id: ""
	I0318 22:02:36.364068   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.364076   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:36.364083   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:36.364139   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:36.406084   65622 cri.go:89] found id: ""
	I0318 22:02:36.406111   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.406119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:36.406125   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:36.406176   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:36.450861   65622 cri.go:89] found id: ""
	I0318 22:02:36.450887   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.450895   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:36.450900   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:36.450946   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:36.493979   65622 cri.go:89] found id: ""
	I0318 22:02:36.494006   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.494014   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:36.494020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:36.494079   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:36.539123   65622 cri.go:89] found id: ""
	I0318 22:02:36.539150   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.539160   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:36.539167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:36.539233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:36.577460   65622 cri.go:89] found id: ""
	I0318 22:02:36.577485   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.577495   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:36.577502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:36.577546   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:36.615276   65622 cri.go:89] found id: ""
	I0318 22:02:36.615300   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.615308   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:36.615313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:36.615369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:36.652756   65622 cri.go:89] found id: ""
	I0318 22:02:36.652775   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.652782   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:36.652790   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:36.652802   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:36.706253   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:36.706282   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:36.722032   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:36.722055   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:36.797758   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:36.797783   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:36.797799   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:36.875589   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:36.875622   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:39.422267   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:39.436967   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:39.437040   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:39.479916   65622 cri.go:89] found id: ""
	I0318 22:02:39.479941   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.479950   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:39.479956   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:39.480012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:39.542890   65622 cri.go:89] found id: ""
	I0318 22:02:39.542920   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.542930   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:39.542937   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:39.542990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:39.588200   65622 cri.go:89] found id: ""
	I0318 22:02:39.588225   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.588233   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:39.588239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:39.588290   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:39.629014   65622 cri.go:89] found id: ""
	I0318 22:02:39.629036   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.629043   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:39.629049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:39.629105   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:39.675522   65622 cri.go:89] found id: ""
	I0318 22:02:39.675551   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.675561   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:39.675569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:39.675629   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:39.722842   65622 cri.go:89] found id: ""
	I0318 22:02:39.722873   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.722883   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:39.722890   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:39.722951   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:39.760410   65622 cri.go:89] found id: ""
	I0318 22:02:39.760440   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.760451   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:39.760458   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:39.760519   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:39.799982   65622 cri.go:89] found id: ""
	I0318 22:02:39.800007   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.800016   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:39.800027   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:39.800045   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:39.878784   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:39.878805   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:39.878821   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:39.965987   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:39.966021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:36.320427   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.321178   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.799943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:39.300691   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.699873   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.193658   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:40.015006   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:40.015040   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:40.068619   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:40.068648   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:42.586444   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:42.603310   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:42.603394   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:42.645260   65622 cri.go:89] found id: ""
	I0318 22:02:42.645288   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.645296   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:42.645301   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:42.645360   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:42.682004   65622 cri.go:89] found id: ""
	I0318 22:02:42.682029   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.682036   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:42.682042   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:42.682086   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:42.722886   65622 cri.go:89] found id: ""
	I0318 22:02:42.722922   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.722939   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:42.722947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:42.723008   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:42.759183   65622 cri.go:89] found id: ""
	I0318 22:02:42.759208   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.759218   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:42.759224   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:42.759283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:42.799292   65622 cri.go:89] found id: ""
	I0318 22:02:42.799316   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.799325   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:42.799337   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:42.799389   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:42.838821   65622 cri.go:89] found id: ""
	I0318 22:02:42.838848   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.838856   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:42.838861   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:42.838908   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:42.877889   65622 cri.go:89] found id: ""
	I0318 22:02:42.877917   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.877927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:42.877935   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:42.877991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:42.921283   65622 cri.go:89] found id: ""
	I0318 22:02:42.921310   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.921323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:42.921334   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:42.921348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:43.000405   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:43.000444   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:43.042091   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:43.042116   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:43.094030   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:43.094059   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:43.108612   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:43.108647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:43.194388   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:40.321388   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:42.822538   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.799159   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.800027   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.299156   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.693317   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.194419   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:45.694881   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:45.709833   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:45.709897   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:45.749770   65622 cri.go:89] found id: ""
	I0318 22:02:45.749797   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.749806   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:45.749812   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:45.749866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:45.794879   65622 cri.go:89] found id: ""
	I0318 22:02:45.794909   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.794920   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:45.794928   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:45.794988   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:45.841587   65622 cri.go:89] found id: ""
	I0318 22:02:45.841608   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.841618   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:45.841625   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:45.841725   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:45.884972   65622 cri.go:89] found id: ""
	I0318 22:02:45.885004   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.885015   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:45.885023   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:45.885084   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:45.936170   65622 cri.go:89] found id: ""
	I0318 22:02:45.936204   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.936215   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:45.936223   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:45.936286   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:45.993684   65622 cri.go:89] found id: ""
	I0318 22:02:45.993708   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.993715   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:45.993720   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:45.993766   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:46.048422   65622 cri.go:89] found id: ""
	I0318 22:02:46.048445   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.048453   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:46.048459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:46.048512   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:46.087173   65622 cri.go:89] found id: ""
	I0318 22:02:46.087197   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.087206   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:46.087214   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:46.087227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:46.168633   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:46.168661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:46.168675   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:46.250797   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:46.250827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:46.302862   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:46.302883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:46.358096   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:46.358125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:48.874275   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:48.890166   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:48.890231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:48.930832   65622 cri.go:89] found id: ""
	I0318 22:02:48.930861   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.930869   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:48.930875   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:48.930919   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:48.972784   65622 cri.go:89] found id: ""
	I0318 22:02:48.972809   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.972819   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:48.972826   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:48.972884   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:49.011201   65622 cri.go:89] found id: ""
	I0318 22:02:49.011222   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.011229   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:49.011235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:49.011277   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:49.050457   65622 cri.go:89] found id: ""
	I0318 22:02:49.050480   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.050496   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:49.050502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:49.050565   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:49.087585   65622 cri.go:89] found id: ""
	I0318 22:02:49.087611   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.087621   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:49.087629   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:49.087687   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:49.126761   65622 cri.go:89] found id: ""
	I0318 22:02:49.126794   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.126805   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:49.126813   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:49.126874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:49.166045   65622 cri.go:89] found id: ""
	I0318 22:02:49.166074   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.166085   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:49.166092   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:49.166147   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:49.205624   65622 cri.go:89] found id: ""
	I0318 22:02:49.205650   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.205660   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:49.205670   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:49.205684   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:49.257864   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:49.257891   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:49.272581   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:49.272606   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:49.349960   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:49.349981   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:49.349996   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:49.438873   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:49.438916   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:45.322637   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:47.820481   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.300259   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.798429   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.693209   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.693611   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:51.984840   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:52.002378   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:52.002436   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:52.040871   65622 cri.go:89] found id: ""
	I0318 22:02:52.040890   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.040898   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:52.040917   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:52.040973   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:52.076062   65622 cri.go:89] found id: ""
	I0318 22:02:52.076083   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.076090   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:52.076096   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:52.076167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:52.119597   65622 cri.go:89] found id: ""
	I0318 22:02:52.119621   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.119629   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:52.119635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:52.119690   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:52.157892   65622 cri.go:89] found id: ""
	I0318 22:02:52.157919   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.157929   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:52.157936   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:52.157995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:52.196738   65622 cri.go:89] found id: ""
	I0318 22:02:52.196760   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.196767   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:52.196772   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:52.196836   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:52.234012   65622 cri.go:89] found id: ""
	I0318 22:02:52.234036   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.234043   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:52.234049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:52.234104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:52.273720   65622 cri.go:89] found id: ""
	I0318 22:02:52.273750   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.273761   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:52.273769   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:52.273817   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:52.317495   65622 cri.go:89] found id: ""
	I0318 22:02:52.317525   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.317535   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:52.317545   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:52.317619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:52.371640   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:52.371666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:52.387141   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:52.387165   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:52.469009   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:52.469035   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:52.469047   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:52.550848   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:52.550880   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:50.322017   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.820364   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:54.820692   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.799942   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.301665   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.694058   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.194171   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.096980   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:55.111353   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:55.111406   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:55.155832   65622 cri.go:89] found id: ""
	I0318 22:02:55.155857   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.155875   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:55.155882   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:55.155942   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:55.195477   65622 cri.go:89] found id: ""
	I0318 22:02:55.195499   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.195509   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:55.195516   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:55.195567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:55.234536   65622 cri.go:89] found id: ""
	I0318 22:02:55.234564   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.234574   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:55.234582   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:55.234640   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:55.270955   65622 cri.go:89] found id: ""
	I0318 22:02:55.270977   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.270984   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:55.270989   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:55.271033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:55.308883   65622 cri.go:89] found id: ""
	I0318 22:02:55.308919   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.308930   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:55.308937   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:55.308985   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:55.355259   65622 cri.go:89] found id: ""
	I0318 22:02:55.355284   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.355294   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:55.355301   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:55.355364   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:55.392385   65622 cri.go:89] found id: ""
	I0318 22:02:55.392409   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.392417   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:55.392423   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:55.392466   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:55.433773   65622 cri.go:89] found id: ""
	I0318 22:02:55.433794   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.433802   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:55.433810   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:55.433827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:55.518513   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:55.518536   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:55.518553   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:55.602717   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:55.602751   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:55.652409   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:55.652436   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:55.707150   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:55.707175   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.223146   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:58.240213   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:58.240288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:58.280676   65622 cri.go:89] found id: ""
	I0318 22:02:58.280702   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.280711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:58.280719   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:58.280778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:58.324490   65622 cri.go:89] found id: ""
	I0318 22:02:58.324515   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.324524   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:58.324531   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:58.324592   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:58.370256   65622 cri.go:89] found id: ""
	I0318 22:02:58.370288   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.370298   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:58.370309   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:58.370369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:58.419969   65622 cri.go:89] found id: ""
	I0318 22:02:58.420002   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.420012   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:58.420020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:58.420082   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:58.464916   65622 cri.go:89] found id: ""
	I0318 22:02:58.464942   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.464950   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:58.464956   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:58.465016   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:58.511388   65622 cri.go:89] found id: ""
	I0318 22:02:58.511415   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.511425   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:58.511433   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:58.511500   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:58.555314   65622 cri.go:89] found id: ""
	I0318 22:02:58.555344   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.555356   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:58.555364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:58.555426   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:58.595200   65622 cri.go:89] found id: ""
	I0318 22:02:58.595229   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.595239   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:58.595249   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:58.595263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:58.642037   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:58.642069   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:58.700216   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:58.700247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.715851   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:58.715882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:58.792139   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:58.792158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:58.792171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:56.821255   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:58.828524   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.303516   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.791851   65211 pod_ready.go:81] duration metric: took 4m0.000068811s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	E0318 22:02:57.791889   65211 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:02:57.791913   65211 pod_ready.go:38] duration metric: took 4m13.55705031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:02:57.791938   65211 kubeadm.go:591] duration metric: took 4m20.862001116s to restartPrimaryControlPlane
	W0318 22:02:57.792000   65211 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:02:57.792027   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:02:57.692975   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:59.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.395212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:01.411364   65622 kubeadm.go:591] duration metric: took 4m3.302597324s to restartPrimaryControlPlane
	W0318 22:03:01.411442   65622 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:01.411474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:02.800222   65622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.388721926s)
	I0318 22:03:02.800302   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:02.817517   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:02.832036   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:02.844307   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:02.844324   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:02.844381   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:02.857804   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:02.857882   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:02.871307   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:02.883191   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:02.883252   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:02.896457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.908089   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:02.908147   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.920327   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:02.932098   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:02.932158   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:02.944129   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:03.034197   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:03:03.034333   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:03.204271   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:03.204501   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:03.204645   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:03.415789   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:03.417688   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:03.417801   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:03.417902   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:03.418026   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:03.418129   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:03.418242   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:03.418324   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:03.418420   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:03.418502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:03.418614   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:03.418744   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:03.418823   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:03.418916   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:03.644844   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:03.912013   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:04.097560   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:04.222469   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:04.239066   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:04.250168   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:04.250225   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:04.399277   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:04.401154   65622 out.go:204]   - Booting up control plane ...
	I0318 22:03:04.401283   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:04.406500   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:04.407544   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:04.410177   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:04.418949   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:01.321045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:03.322008   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.694585   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:04.195750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:05.322087   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:07.820940   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:09.822652   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:06.693803   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:08.693856   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:10.694375   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:12.321504   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:14.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:13.192173   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:15.193816   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:16.822327   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.322059   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:17.691761   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.691867   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.322674   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.823374   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.692710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.695045   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.192838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.322370   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:28.820807   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.165008   65211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.372946393s)
	I0318 22:03:30.165087   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:30.184259   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:30.198417   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:30.210595   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:30.210624   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:30.210675   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:30.222159   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:30.222210   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:30.234099   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:30.244546   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:30.244621   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:30.255192   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.265777   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:30.265833   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.276674   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:30.286349   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:30.286402   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:30.296530   65211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:30.522414   65211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:03:28.193120   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.194300   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:31.321986   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:33.823045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:32.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:34.693824   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.294937   65211 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:03:39.295015   65211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:39.295142   65211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:39.295296   65211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:39.295451   65211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:39.295550   65211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:39.297047   65211 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:39.297135   65211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:39.297250   65211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:39.297368   65211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:39.297461   65211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:39.297557   65211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:39.297640   65211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:39.297742   65211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:39.297831   65211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:39.297939   65211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:39.298032   65211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:39.298084   65211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:39.298206   65211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:39.298301   65211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:39.298376   65211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:39.298451   65211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:39.298518   65211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:39.298612   65211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:39.298693   65211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:39.299829   65211 out.go:204]   - Booting up control plane ...
	I0318 22:03:39.299959   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:39.300052   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:39.300150   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:39.300308   65211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:39.300444   65211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:39.300496   65211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:39.300713   65211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:39.300829   65211 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003359 seconds
	I0318 22:03:39.300997   65211 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:03:39.301155   65211 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:03:39.301228   65211 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:03:39.301451   65211 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-141758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:03:39.301526   65211 kubeadm.go:309] [bootstrap-token] Using token: p114v6.erax4pf5xkn6x2it
	I0318 22:03:39.302903   65211 out.go:204]   - Configuring RBAC rules ...
	I0318 22:03:39.303025   65211 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:03:39.303133   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:03:39.303301   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:03:39.303479   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:03:39.303574   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:03:39.303651   65211 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:03:39.303810   65211 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:03:39.303886   65211 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:03:39.303960   65211 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:03:39.303972   65211 kubeadm.go:309] 
	I0318 22:03:39.304041   65211 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:03:39.304050   65211 kubeadm.go:309] 
	I0318 22:03:39.304158   65211 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:03:39.304173   65211 kubeadm.go:309] 
	I0318 22:03:39.304208   65211 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:03:39.304292   65211 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:03:39.304368   65211 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:03:39.304377   65211 kubeadm.go:309] 
	I0318 22:03:39.304456   65211 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:03:39.304465   65211 kubeadm.go:309] 
	I0318 22:03:39.304547   65211 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:03:39.304570   65211 kubeadm.go:309] 
	I0318 22:03:39.304649   65211 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:03:39.304754   65211 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:03:39.304861   65211 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:03:39.304878   65211 kubeadm.go:309] 
	I0318 22:03:39.305028   65211 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:03:39.305134   65211 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:03:39.305144   65211 kubeadm.go:309] 
	I0318 22:03:39.305248   65211 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305390   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:03:39.305422   65211 kubeadm.go:309] 	--control-plane 
	I0318 22:03:39.305430   65211 kubeadm.go:309] 
	I0318 22:03:39.305545   65211 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:03:39.305556   65211 kubeadm.go:309] 
	I0318 22:03:39.305676   65211 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305843   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:03:39.305859   65211 cni.go:84] Creating CNI manager for ""
	I0318 22:03:39.305873   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:03:39.307416   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:03:36.323956   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:38.821180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.308819   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:03:39.375416   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 22:03:39.434235   65211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:03:39.434303   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.434360   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-141758 minikube.k8s.io/updated_at=2024_03_18T22_03_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=embed-certs-141758 minikube.k8s.io/primary=true
	I0318 22:03:39.677778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.708540   65211 ops.go:34] apiserver oom_adj: -16
	I0318 22:03:40.178803   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:40.678832   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:37.193451   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.193667   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:44.419883   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:03:44.420568   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:44.420749   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:40.821359   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.323788   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:41.678334   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.177921   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.678115   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.178034   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.678655   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.177993   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.678581   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.177929   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.678124   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:46.178423   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.693587   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.693965   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.195060   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:49.421054   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:49.421381   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:45.821472   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:47.822362   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.678288   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.178394   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.678824   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.678144   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.178090   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.678295   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.178829   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.677856   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:51.177778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.197085   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:50.693056   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:51.192418   65699 pod_ready.go:81] duration metric: took 4m0.006727095s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:51.192452   65699 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0318 22:03:51.192462   65699 pod_ready.go:38] duration metric: took 4m5.551753918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:51.192480   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:51.192514   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:51.192574   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:51.248553   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:51.248575   65699 cri.go:89] found id: ""
	I0318 22:03:51.248583   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:51.248634   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.254205   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:51.254270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:51.303508   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.303534   65699 cri.go:89] found id: ""
	I0318 22:03:51.303543   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:51.303600   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.310160   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:51.310212   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:51.357409   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:51.357429   65699 cri.go:89] found id: ""
	I0318 22:03:51.357436   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:51.357480   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.362683   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:51.362744   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:51.413520   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.413550   65699 cri.go:89] found id: ""
	I0318 22:03:51.413560   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:51.413619   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.419412   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:51.419483   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:51.468338   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:51.468365   65699 cri.go:89] found id: ""
	I0318 22:03:51.468374   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:51.468432   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.474006   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:51.474070   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:51.520166   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:51.520188   65699 cri.go:89] found id: ""
	I0318 22:03:51.520195   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:51.520246   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.526087   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:51.526148   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:51.570735   65699 cri.go:89] found id: ""
	I0318 22:03:51.570761   65699 logs.go:276] 0 containers: []
	W0318 22:03:51.570772   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:51.570779   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:51.570832   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:51.678380   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.178543   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.677807   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.814739   65211 kubeadm.go:1107] duration metric: took 13.380493852s to wait for elevateKubeSystemPrivileges
	W0318 22:03:52.814773   65211 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:03:52.814782   65211 kubeadm.go:393] duration metric: took 5m15.94869953s to StartCluster
	I0318 22:03:52.814803   65211 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.814883   65211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:03:52.816928   65211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.817192   65211 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:03:52.818800   65211 out.go:177] * Verifying Kubernetes components...
	I0318 22:03:52.817486   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:03:52.817499   65211 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:03:52.820175   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:03:52.818838   65211 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-141758"
	I0318 22:03:52.820277   65211 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-141758"
	W0318 22:03:52.820288   65211 addons.go:243] addon storage-provisioner should already be in state true
	I0318 22:03:52.818844   65211 addons.go:69] Setting metrics-server=true in profile "embed-certs-141758"
	I0318 22:03:52.820369   65211 addons.go:234] Setting addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:52.818848   65211 addons.go:69] Setting default-storageclass=true in profile "embed-certs-141758"
	I0318 22:03:52.820429   65211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-141758"
	I0318 22:03:52.820317   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	W0318 22:03:52.820386   65211 addons.go:243] addon metrics-server should already be in state true
	I0318 22:03:52.820697   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.820821   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820846   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.820872   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820899   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.821079   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.821107   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.839829   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0318 22:03:52.839850   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I0318 22:03:52.839992   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840448   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840986   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841010   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841124   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841144   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841148   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841162   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841385   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841428   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841557   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841639   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.842001   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842043   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.842049   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842068   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.845295   65211 addons.go:234] Setting addon default-storageclass=true in "embed-certs-141758"
	W0318 22:03:52.845315   65211 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:03:52.845343   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.845692   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.845736   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.864111   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0318 22:03:52.864141   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0318 22:03:52.864614   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.864688   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.865181   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865199   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865318   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865334   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865556   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866107   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.866147   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.866343   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866630   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.868253   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.870076   65211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:03:52.871315   65211 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:52.871333   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:03:52.871352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.873922   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0318 22:03:52.874420   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.874924   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.874944   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.875080   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.875194   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.875254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.875346   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.875478   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.875718   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.875733   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.876060   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.876234   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.877582   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.879040   65211 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:03:50.320724   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.321791   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:54.821845   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.880124   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:03:52.880135   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:03:52.880152   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.882530   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.882957   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.882979   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.883230   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.883371   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.883507   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.883638   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.886181   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0318 22:03:52.886563   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.887043   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.887064   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.887416   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.887599   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.888998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.889490   65211 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:52.889504   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:03:52.889519   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.891985   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892380   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.892435   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.892776   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.892949   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.893066   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:53.047557   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:03:53.098470   65211 node_ready.go:35] waiting up to 6m0s for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111074   65211 node_ready.go:49] node "embed-certs-141758" has status "Ready":"True"
	I0318 22:03:53.111093   65211 node_ready.go:38] duration metric: took 12.593803ms for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111102   65211 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:53.127297   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:53.167460   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:03:53.167476   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:03:53.199789   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:53.221070   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:53.233431   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:03:53.233452   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:03:53.298339   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:53.298368   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:03:53.415046   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:55.057164   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.85734001s)
	I0318 22:03:55.057233   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057252   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.057590   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057601   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.057614   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057634   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057888   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057929   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.064097   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.064111   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.064376   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.064402   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.064418   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.138948   65211 pod_ready.go:92] pod "coredns-5dd5756b68-k675p" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.138968   65211 pod_ready.go:81] duration metric: took 2.011647544s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.138976   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150187   65211 pod_ready.go:92] pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.150204   65211 pod_ready.go:81] duration metric: took 11.222328ms for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150213   65211 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157054   65211 pod_ready.go:92] pod "etcd-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.157073   65211 pod_ready.go:81] duration metric: took 6.853876ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157086   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.167962   65211 pod_ready.go:92] pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.167986   65211 pod_ready.go:81] duration metric: took 10.892042ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.168000   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177187   65211 pod_ready.go:92] pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.177204   65211 pod_ready.go:81] duration metric: took 9.197593ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177213   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.515883   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.294780085s)
	I0318 22:03:55.515937   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.515948   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.515952   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.100869127s)
	I0318 22:03:55.515994   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516014   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516301   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516378   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516469   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516481   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516491   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516406   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516451   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516665   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516683   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516691   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516772   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516839   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516867   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519334   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.519340   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.519355   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519365   65211 addons.go:470] Verifying addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:55.520941   65211 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 22:03:55.522318   65211 addons.go:505] duration metric: took 2.704813533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 22:03:55.545590   65211 pod_ready.go:92] pod "kube-proxy-jltc7" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.545614   65211 pod_ready.go:81] duration metric: took 368.395697ms for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.545625   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932726   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.932750   65211 pod_ready.go:81] duration metric: took 387.117475ms for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932757   65211 pod_ready.go:38] duration metric: took 2.821645915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:55.932771   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:55.932815   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.969924   65211 api_server.go:72] duration metric: took 3.152691986s to wait for apiserver process to appear ...
	I0318 22:03:55.969955   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.969977   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 22:03:55.976004   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 22:03:55.977450   65211 api_server.go:141] control plane version: v1.28.4
	I0318 22:03:55.977489   65211 api_server.go:131] duration metric: took 7.525909ms to wait for apiserver health ...
	I0318 22:03:55.977499   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:56.138403   65211 system_pods.go:59] 9 kube-system pods found
	I0318 22:03:56.138429   65211 system_pods.go:61] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.138434   65211 system_pods.go:61] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.138438   65211 system_pods.go:61] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.138441   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.138444   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.138448   65211 system_pods.go:61] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.138453   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.138462   65211 system_pods.go:61] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.138519   65211 system_pods.go:61] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 22:03:56.138532   65211 system_pods.go:74] duration metric: took 161.01924ms to wait for pod list to return data ...
	I0318 22:03:56.138544   65211 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:03:56.331884   65211 default_sa.go:45] found service account: "default"
	I0318 22:03:56.331926   65211 default_sa.go:55] duration metric: took 193.36174ms for default service account to be created ...
	I0318 22:03:56.331937   65211 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:03:56.536411   65211 system_pods.go:86] 9 kube-system pods found
	I0318 22:03:56.536443   65211 system_pods.go:89] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.536452   65211 system_pods.go:89] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.536459   65211 system_pods.go:89] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.536466   65211 system_pods.go:89] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.536472   65211 system_pods.go:89] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.536479   65211 system_pods.go:89] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.536486   65211 system_pods.go:89] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.536497   65211 system_pods.go:89] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.536507   65211 system_pods.go:89] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Running
	I0318 22:03:56.536518   65211 system_pods.go:126] duration metric: took 204.57366ms to wait for k8s-apps to be running ...
	I0318 22:03:56.536531   65211 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:03:56.536579   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:56.557315   65211 system_svc.go:56] duration metric: took 20.775851ms WaitForService to wait for kubelet
	I0318 22:03:56.557344   65211 kubeadm.go:576] duration metric: took 3.740121987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:03:56.557375   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:03:51.614216   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:51.614235   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.614239   65699 cri.go:89] found id: ""
	I0318 22:03:51.614245   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:51.614297   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.619100   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.623808   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:51.623827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:51.780027   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:51.780067   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.842134   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:51.842167   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.889769   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:51.889797   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.942502   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:51.942543   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:52.467986   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:52.468043   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:52.518980   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:52.519023   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:52.536546   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:52.536586   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:52.591854   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:52.591894   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:52.640783   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:52.640818   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:52.687934   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:52.687967   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:52.749690   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:52.749726   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:52.807019   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:52.807064   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:55.392930   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.415406   65699 api_server.go:72] duration metric: took 4m15.533409678s to wait for apiserver process to appear ...
	I0318 22:03:55.415435   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.415472   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:55.415523   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:55.474200   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:55.474227   65699 cri.go:89] found id: ""
	I0318 22:03:55.474237   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:55.474295   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.479787   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:55.479907   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:55.532114   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:55.532136   65699 cri.go:89] found id: ""
	I0318 22:03:55.532145   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:55.532202   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.537215   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:55.537270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:55.588633   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:55.588657   65699 cri.go:89] found id: ""
	I0318 22:03:55.588666   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:55.588723   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.595711   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:55.595777   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:55.646684   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:55.646704   65699 cri.go:89] found id: ""
	I0318 22:03:55.646714   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:55.646770   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.651920   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:55.651982   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:55.694948   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:55.694975   65699 cri.go:89] found id: ""
	I0318 22:03:55.694984   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:55.695035   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.700275   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:55.700343   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:55.740536   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:55.740559   65699 cri.go:89] found id: ""
	I0318 22:03:55.740568   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:55.740618   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.745384   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:55.745446   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:55.784614   65699 cri.go:89] found id: ""
	I0318 22:03:55.784645   65699 logs.go:276] 0 containers: []
	W0318 22:03:55.784657   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:55.784664   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:55.784727   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:55.827306   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:55.827334   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:55.827341   65699 cri.go:89] found id: ""
	I0318 22:03:55.827349   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:55.827404   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.832314   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.838497   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:55.838520   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:55.857285   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:55.857319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:55.984597   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:55.984629   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:56.044283   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:56.044339   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:56.100329   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:56.100363   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:56.173231   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:56.173270   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:56.221280   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:56.221310   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:56.274110   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:56.274138   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:56.332863   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:56.332891   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:56.374289   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:56.374317   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:56.423793   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:56.423827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:56.478696   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:56.478734   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:56.518600   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:56.518627   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:56.731788   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:03:56.731810   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 22:03:56.731823   65211 node_conditions.go:105] duration metric: took 174.442649ms to run NodePressure ...
	I0318 22:03:56.731835   65211 start.go:240] waiting for startup goroutines ...
	I0318 22:03:56.731845   65211 start.go:245] waiting for cluster config update ...
	I0318 22:03:56.731857   65211 start.go:254] writing updated cluster config ...
	I0318 22:03:56.732109   65211 ssh_runner.go:195] Run: rm -f paused
	I0318 22:03:56.778660   65211 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:03:56.780431   65211 out.go:177] * Done! kubectl is now configured to use "embed-certs-141758" cluster and "default" namespace by default
	I0318 22:03:59.422001   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:59.422212   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:56.814631   65170 pod_ready.go:81] duration metric: took 4m0.000725499s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:56.814661   65170 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:03:56.814684   65170 pod_ready.go:38] duration metric: took 4m11.531709977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:56.814712   65170 kubeadm.go:591] duration metric: took 4m19.482098142s to restartPrimaryControlPlane
	W0318 22:03:56.814767   65170 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:56.814797   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:59.480665   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 22:03:59.485792   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 22:03:59.487343   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 22:03:59.487364   65699 api_server.go:131] duration metric: took 4.071921663s to wait for apiserver health ...
	I0318 22:03:59.487375   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:59.487406   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:59.487462   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:59.540845   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:59.540872   65699 cri.go:89] found id: ""
	I0318 22:03:59.540881   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:59.540958   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.547759   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:59.547824   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:59.593015   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:59.593042   65699 cri.go:89] found id: ""
	I0318 22:03:59.593051   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:59.593106   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.598169   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:59.598233   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:59.638484   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:59.638508   65699 cri.go:89] found id: ""
	I0318 22:03:59.638517   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:59.638575   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.643353   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:59.643416   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:59.687190   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:59.687208   65699 cri.go:89] found id: ""
	I0318 22:03:59.687216   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:59.687271   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.692481   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:59.692550   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:59.735798   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:59.735824   65699 cri.go:89] found id: ""
	I0318 22:03:59.735834   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:59.735893   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.742192   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:59.742263   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:59.782961   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:59.782989   65699 cri.go:89] found id: ""
	I0318 22:03:59.783000   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:59.783060   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.788247   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:59.788325   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:59.836955   65699 cri.go:89] found id: ""
	I0318 22:03:59.836983   65699 logs.go:276] 0 containers: []
	W0318 22:03:59.836992   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:59.836998   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:59.837052   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:59.879225   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:59.879250   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:59.879255   65699 cri.go:89] found id: ""
	I0318 22:03:59.879264   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:59.879323   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.884380   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.889289   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:59.889316   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:04:00.307344   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:04:00.307389   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:04:00.325472   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:04:00.325496   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:04:00.388254   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:04:00.388288   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:04:00.430203   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:04:00.430241   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:04:00.476834   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:04:00.476861   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:04:00.532672   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:04:00.532703   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:04:00.572174   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:04:00.572202   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:04:00.624250   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:04:00.624283   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:04:00.688520   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:04:00.688551   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:04:00.764279   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:04:00.764319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:04:00.903231   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:04:00.903262   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:04:00.974836   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:04:00.974869   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:04:03.547135   65699 system_pods.go:59] 8 kube-system pods found
	I0318 22:04:03.547166   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.547172   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.547180   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.547186   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.547193   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.547198   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.547208   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.547214   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.547224   65699 system_pods.go:74] duration metric: took 4.059842092s to wait for pod list to return data ...
	I0318 22:04:03.547233   65699 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:03.554656   65699 default_sa.go:45] found service account: "default"
	I0318 22:04:03.554682   65699 default_sa.go:55] duration metric: took 7.437557ms for default service account to be created ...
	I0318 22:04:03.554692   65699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:03.562342   65699 system_pods.go:86] 8 kube-system pods found
	I0318 22:04:03.562369   65699 system_pods.go:89] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.562374   65699 system_pods.go:89] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.562378   65699 system_pods.go:89] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.562383   65699 system_pods.go:89] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.562387   65699 system_pods.go:89] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.562391   65699 system_pods.go:89] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.562397   65699 system_pods.go:89] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.562402   65699 system_pods.go:89] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.562410   65699 system_pods.go:126] duration metric: took 7.712357ms to wait for k8s-apps to be running ...
	I0318 22:04:03.562424   65699 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:03.562470   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:03.579949   65699 system_svc.go:56] duration metric: took 17.517801ms WaitForService to wait for kubelet
	I0318 22:04:03.579977   65699 kubeadm.go:576] duration metric: took 4m23.697982351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:03.579993   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:03.585009   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:03.585037   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:03.585049   65699 node_conditions.go:105] duration metric: took 5.050614ms to run NodePressure ...
	I0318 22:04:03.585063   65699 start.go:240] waiting for startup goroutines ...
	I0318 22:04:03.585075   65699 start.go:245] waiting for cluster config update ...
	I0318 22:04:03.585089   65699 start.go:254] writing updated cluster config ...
	I0318 22:04:03.585426   65699 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:03.634969   65699 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 22:04:03.637561   65699 out.go:177] * Done! kubectl is now configured to use "no-preload-963041" cluster and "default" namespace by default
	I0318 22:04:19.422826   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:19.423111   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:29.143869   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.329052492s)
	I0318 22:04:29.143935   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:29.161708   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:04:29.173738   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:04:29.185221   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:04:29.185241   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 22:04:29.185273   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 22:04:29.196326   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:04:29.196382   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:04:29.207305   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 22:04:29.217759   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:04:29.217811   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:04:29.228350   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.239148   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:04:29.239191   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.251191   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 22:04:29.262291   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:04:29.262339   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:04:29.273343   65170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:04:29.332561   65170 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:04:29.333329   65170 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:04:29.496432   65170 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:04:29.496558   65170 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:04:29.496720   65170 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:04:29.728202   65170 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:04:29.730047   65170 out.go:204]   - Generating certificates and keys ...
	I0318 22:04:29.730126   65170 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:04:29.730202   65170 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:04:29.730297   65170 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:04:29.730669   65170 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:04:29.731209   65170 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:04:29.731887   65170 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:04:29.732569   65170 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:04:29.733362   65170 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:04:29.734045   65170 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:04:29.734477   65170 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:04:29.735264   65170 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:04:29.735340   65170 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:04:30.122363   65170 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:04:30.296021   65170 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:04:30.555774   65170 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:04:30.674403   65170 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:04:30.674943   65170 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:04:30.677509   65170 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:04:30.679219   65170 out.go:204]   - Booting up control plane ...
	I0318 22:04:30.679319   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:04:30.679402   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:04:30.681975   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:04:30.701015   65170 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:04:30.701902   65170 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:04:30.702104   65170 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:04:30.843019   65170 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:04:36.846312   65170 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002976 seconds
	I0318 22:04:36.846520   65170 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:04:36.870892   65170 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:04:37.410373   65170 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:04:37.410649   65170 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-660775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:04:37.935730   65170 kubeadm.go:309] [bootstrap-token] Using token: jwgiie.tp4r5ug6emevtbxj
	I0318 22:04:37.937024   65170 out.go:204]   - Configuring RBAC rules ...
	I0318 22:04:37.937156   65170 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:04:37.943204   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:04:37.951400   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:04:37.958005   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:04:37.962013   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:04:37.965783   65170 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:04:37.985150   65170 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:04:38.241561   65170 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:04:38.355495   65170 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:04:38.356452   65170 kubeadm.go:309] 
	I0318 22:04:38.356511   65170 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:04:38.356520   65170 kubeadm.go:309] 
	I0318 22:04:38.356598   65170 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:04:38.356609   65170 kubeadm.go:309] 
	I0318 22:04:38.356667   65170 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:04:38.356774   65170 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:04:38.356828   65170 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:04:38.356844   65170 kubeadm.go:309] 
	I0318 22:04:38.356898   65170 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:04:38.356916   65170 kubeadm.go:309] 
	I0318 22:04:38.356976   65170 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:04:38.356984   65170 kubeadm.go:309] 
	I0318 22:04:38.357030   65170 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:04:38.357093   65170 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:04:38.357161   65170 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:04:38.357168   65170 kubeadm.go:309] 
	I0318 22:04:38.357263   65170 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:04:38.357364   65170 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:04:38.357376   65170 kubeadm.go:309] 
	I0318 22:04:38.357495   65170 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.357657   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:04:38.357707   65170 kubeadm.go:309] 	--control-plane 
	I0318 22:04:38.357724   65170 kubeadm.go:309] 
	I0318 22:04:38.357861   65170 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:04:38.357873   65170 kubeadm.go:309] 
	I0318 22:04:38.357986   65170 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.358144   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:04:38.358726   65170 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:38.358772   65170 cni.go:84] Creating CNI manager for ""
	I0318 22:04:38.358789   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:04:38.360246   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:04:38.361264   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:04:38.378420   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
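
The 457-byte conflist itself is not printed in the log; for orientation only, a bridge plus host-local CNI config of the kind written here typically looks like the sketch below (all field values are assumptions, not the actual file from this run):

    # Hypothetical example of a bridge CNI conflist; not the file written above.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
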
	I0318 22:04:38.482111   65170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:04:38.482178   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:38.482194   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-660775 minikube.k8s.io/updated_at=2024_03_18T22_04_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=default-k8s-diff-port-660775 minikube.k8s.io/primary=true
	I0318 22:04:38.617420   65170 ops.go:34] apiserver oom_adj: -16
	I0318 22:04:38.828087   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.328292   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.828411   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.328829   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.828338   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.329118   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.828239   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.328296   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.828241   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.329151   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.829036   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.328224   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.828465   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.328632   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.828289   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.328321   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.828493   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.329008   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.828789   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.328727   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.829024   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.329010   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.828311   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.328474   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.445593   65170 kubeadm.go:1107] duration metric: took 11.963480655s to wait for elevateKubeSystemPrivileges
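
The repeated 'kubectl get sa default' calls above are a poll-until-ready loop: the elevateKubeSystemPrivileges step retries about twice per second until the "default" ServiceAccount exists. A shell equivalent of that wait (illustrative only) is:

    # Illustrative equivalent of the poll loop above: retry until the "default"
    # ServiceAccount exists, then continue.
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
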
	W0318 22:04:50.445640   65170 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:04:50.445651   65170 kubeadm.go:393] duration metric: took 5m13.168616417s to StartCluster
	I0318 22:04:50.445672   65170 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.445754   65170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:04:50.447789   65170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.448086   65170 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:04:50.449989   65170 out.go:177] * Verifying Kubernetes components...
	I0318 22:04:50.448238   65170 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:04:50.450030   65170 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450044   65170 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450068   65170 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-660775"
	I0318 22:04:50.450070   65170 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.450078   65170 addons.go:243] addon storage-provisioner should already be in state true
	W0318 22:04:50.450082   65170 addons.go:243] addon metrics-server should already be in state true
	I0318 22:04:50.450105   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450116   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450033   65170 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450181   65170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-660775"
	I0318 22:04:50.450493   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450516   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450585   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450628   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.448310   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:04:50.452465   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:04:50.466764   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0318 22:04:50.468214   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0318 22:04:50.468460   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.468676   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469019   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469038   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469182   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469195   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469254   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0318 22:04:50.469549   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.469605   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469603   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470035   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.470053   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.470320   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470350   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470381   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470385   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470395   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470535   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.473854   65170 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.473879   65170 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:04:50.473907   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.474268   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.474301   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.485707   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39175
	I0318 22:04:50.486097   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0318 22:04:50.486278   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486675   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486809   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.486818   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487074   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.487086   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487345   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487513   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487561   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.487759   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.489284   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.491084   65170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:04:50.489730   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.492156   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0318 22:04:50.492539   65170 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.492549   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:04:50.492563   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.494057   65170 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:04:50.492998   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.495232   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:04:50.495253   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:04:50.495275   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.495863   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.495887   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.495952   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.496340   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496476   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.496620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.496757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.496861   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.497350   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.498004   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.498047   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.498450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.499027   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499235   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.499406   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.499565   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.499691   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.515126   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0318 22:04:50.515913   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.516473   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.516498   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.516800   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.517008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.518559   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.518811   65170 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.518825   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:04:50.518842   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.522625   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523156   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.523537   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523810   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.523984   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.524193   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.524430   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.682066   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:04:50.699269   65170 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709309   65170 node_ready.go:49] node "default-k8s-diff-port-660775" has status "Ready":"True"
	I0318 22:04:50.709330   65170 node_ready.go:38] duration metric: took 10.026001ms for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709342   65170 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.713958   65170 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720434   65170 pod_ready.go:92] pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.720459   65170 pod_ready.go:81] duration metric: took 6.477329ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720471   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725799   65170 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.725820   65170 pod_ready.go:81] duration metric: took 5.341405ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725829   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.730987   65170 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.731006   65170 pod_ready.go:81] duration metric: took 5.171376ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.731016   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737458   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.737481   65170 pod_ready.go:81] duration metric: took 6.458242ms for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737490   65170 pod_ready.go:38] duration metric: took 28.137606ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
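
The readiness checks above can be reproduced from the host with plain kubectl; assuming the kubectl context that minikube creates for this profile, a manual equivalent would be:

    # Manual equivalents of the node/pod readiness waits logged above
    # (context name assumed to match the profile).
    kubectl --context default-k8s-diff-port-660775 wait --for=condition=Ready \
      node/default-k8s-diff-port-660775 --timeout=6m
    kubectl --context default-k8s-diff-port-660775 -n kube-system wait \
      --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
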
	I0318 22:04:50.737506   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:04:50.737560   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:04:50.757770   65170 api_server.go:72] duration metric: took 309.622189ms to wait for apiserver process to appear ...
	I0318 22:04:50.757795   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:04:50.757815   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 22:04:50.765732   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 22:04:50.769202   65170 api_server.go:141] control plane version: v1.28.4
	I0318 22:04:50.769228   65170 api_server.go:131] duration metric: took 11.424563ms to wait for apiserver health ...
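
The healthz probe above can also be run by hand against the same apiserver endpoint; with -k to skip TLS verification (or --cacert pointed at the profile's CA instead), the expected reply is the plain string "ok":

    # Manual equivalent of the apiserver health check logged above.
    curl -k https://192.168.50.150:8444/healthz
    # expected output: ok
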
	I0318 22:04:50.769238   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:04:50.831223   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.859994   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:04:50.860014   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:04:50.864994   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.905212   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:04:50.905257   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:04:50.918389   65170 system_pods.go:59] 4 kube-system pods found
	I0318 22:04:50.918416   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:50.918422   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:50.918426   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:50.918429   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:50.918435   65170 system_pods.go:74] duration metric: took 149.190745ms to wait for pod list to return data ...
	I0318 22:04:50.918442   65170 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:50.993150   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:50.993174   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:04:51.056974   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:51.124585   65170 default_sa.go:45] found service account: "default"
	I0318 22:04:51.124612   65170 default_sa.go:55] duration metric: took 206.163161ms for default service account to be created ...
	I0318 22:04:51.124624   65170 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:51.347373   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.347408   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347419   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347426   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.347433   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.347440   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.347452   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.347458   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.347478   65170 retry.go:31] will retry after 201.830143ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.556559   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.556594   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556605   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556621   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.556630   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.556638   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.556648   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.556663   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.556681   65170 retry.go:31] will retry after 312.139871ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.878515   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.878546   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878554   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878562   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.878568   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.878573   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.878579   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.878582   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.878596   65170 retry.go:31] will retry after 379.864885ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.364944   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.364971   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364979   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364987   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.364995   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.365002   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.365011   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:52.365018   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.365039   65170 retry.go:31] will retry after 598.040475ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.752856   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921596456s)
	I0318 22:04:52.752915   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.752928   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753278   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753303   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.753314   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.753323   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753565   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753580   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.781081   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.781102   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.781396   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.781417   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.973228   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.973256   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Running
	I0318 22:04:52.973262   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Running
	I0318 22:04:52.973269   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.973275   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.973282   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.973289   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Running
	I0318 22:04:52.973295   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.973304   65170 system_pods.go:126] duration metric: took 1.848673952s to wait for k8s-apps to be running ...
	I0318 22:04:52.973310   65170 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:52.973361   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:53.343164   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.286142485s)
	I0318 22:04:53.343193   65170 system_svc.go:56] duration metric: took 369.874916ms WaitForService to wait for kubelet
	I0318 22:04:53.343215   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343229   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343216   65170 kubeadm.go:576] duration metric: took 2.89507195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:53.343238   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:53.343265   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.478242665s)
	I0318 22:04:53.343301   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343311   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343510   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.343555   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.343564   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.343572   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343580   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345065   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345078   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345082   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345065   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345094   65170 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-660775"
	I0318 22:04:53.345094   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345117   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345127   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.345136   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345401   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345400   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345419   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.347668   65170 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0318 22:04:53.348839   65170 addons.go:505] duration metric: took 2.900603006s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0318 22:04:53.363245   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:53.363274   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:53.363307   65170 node_conditions.go:105] duration metric: took 20.053581ms to run NodePressure ...
	I0318 22:04:53.363325   65170 start.go:240] waiting for startup goroutines ...
	I0318 22:04:53.363339   65170 start.go:245] waiting for cluster config update ...
	I0318 22:04:53.363353   65170 start.go:254] writing updated cluster config ...
	I0318 22:04:53.363674   65170 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:53.429018   65170 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:04:53.430584   65170 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-660775" cluster and "default" namespace by default
	I0318 22:04:59.424318   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:59.425052   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:59.425084   65622 kubeadm.go:309] 
	I0318 22:04:59.425146   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:04:59.425207   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:04:59.425223   65622 kubeadm.go:309] 
	I0318 22:04:59.425262   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:04:59.425298   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:04:59.425454   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:04:59.425481   65622 kubeadm.go:309] 
	I0318 22:04:59.425647   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:04:59.425704   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:04:59.425752   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:04:59.425762   65622 kubeadm.go:309] 
	I0318 22:04:59.425917   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:04:59.426033   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:04:59.426045   65622 kubeadm.go:309] 
	I0318 22:04:59.426212   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:04:59.426346   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:04:59.426454   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:04:59.426547   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:04:59.426558   65622 kubeadm.go:309] 
	I0318 22:04:59.427148   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:59.427271   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:04:59.427372   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 22:04:59.427528   65622 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 22:04:59.427572   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:05:00.055064   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:05:00.070514   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:05:00.083916   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:05:00.083938   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:05:00.083984   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:05:00.095316   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:05:00.095362   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:05:00.106457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:05:00.117255   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:05:00.117309   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:05:00.128432   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.138314   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:05:00.138371   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.148443   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:05:00.158539   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:05:00.158585   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:05:00.169165   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:05:00.245400   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:05:00.245473   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:05:00.417644   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:05:00.417785   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:05:00.417883   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:05:00.634147   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:05:00.635738   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:05:00.635843   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:05:00.635930   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:05:00.636028   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:05:00.636089   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:05:00.636314   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:05:00.636537   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:05:00.636954   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:05:00.637502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:05:00.637924   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:05:00.638340   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:05:00.638425   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:05:00.638514   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:05:00.913839   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:05:00.990231   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:05:01.230957   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:05:01.548589   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:05:01.567890   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:05:01.569831   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:05:01.569913   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:05:01.734815   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:05:01.736685   65622 out.go:204]   - Booting up control plane ...
	I0318 22:05:01.736810   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:05:01.749926   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:05:01.751335   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:05:01.753793   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:05:01.754600   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:05:41.756944   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:05:41.757321   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:41.757565   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:46.758228   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:46.758483   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:56.759061   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:56.759280   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:16.760134   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:16.760369   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761317   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:56.761611   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761630   65622 kubeadm.go:309] 
	I0318 22:06:56.761682   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:06:56.761725   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:06:56.761732   65622 kubeadm.go:309] 
	I0318 22:06:56.761782   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:06:56.761829   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:06:56.761971   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:06:56.761988   65622 kubeadm.go:309] 
	I0318 22:06:56.762111   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:06:56.762159   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:06:56.762207   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:06:56.762221   65622 kubeadm.go:309] 
	I0318 22:06:56.762382   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:06:56.762502   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:06:56.762512   65622 kubeadm.go:309] 
	I0318 22:06:56.762630   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:06:56.762758   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:06:56.762856   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:06:56.762985   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:06:56.763011   65622 kubeadm.go:309] 
	I0318 22:06:56.763456   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:06:56.763590   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:06:56.763681   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 22:06:56.763764   65622 kubeadm.go:393] duration metric: took 7m58.719030677s to StartCluster
	I0318 22:06:56.763817   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:06:56.763885   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:06:56.813440   65622 cri.go:89] found id: ""
	I0318 22:06:56.813469   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.813480   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:06:56.813487   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:06:56.813553   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:06:56.852826   65622 cri.go:89] found id: ""
	I0318 22:06:56.852854   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.852865   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:06:56.852872   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:06:56.852949   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:06:56.894024   65622 cri.go:89] found id: ""
	I0318 22:06:56.894049   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.894057   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:06:56.894062   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:06:56.894123   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:06:56.932924   65622 cri.go:89] found id: ""
	I0318 22:06:56.932955   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.932967   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:06:56.932975   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:06:56.933033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:06:56.973307   65622 cri.go:89] found id: ""
	I0318 22:06:56.973336   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.973344   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:06:56.973350   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:06:56.973405   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:06:57.009107   65622 cri.go:89] found id: ""
	I0318 22:06:57.009134   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.009142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:06:57.009151   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:06:57.009213   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:06:57.046883   65622 cri.go:89] found id: ""
	I0318 22:06:57.046912   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.046922   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:06:57.046930   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:06:57.046991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:06:57.087670   65622 cri.go:89] found id: ""
	I0318 22:06:57.087698   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.087709   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:06:57.087722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:06:57.087736   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:06:57.143284   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:06:57.143320   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:06:57.159775   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:06:57.159803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:06:57.248520   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:06:57.248548   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:06:57.248563   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:06:57.368197   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:06:57.368230   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 22:06:57.413080   65622 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 22:06:57.413134   65622 out.go:239] * 
	W0318 22:06:57.413205   65622 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.413237   65622 out.go:239] * 
	W0318 22:06:57.414373   65622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 22:06:57.417746   65622 out.go:177] 
	W0318 22:06:57.418940   65622 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.419004   65622 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 22:06:57.419028   65622 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 22:06:57.420531   65622 out.go:177] 
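The suggestion above hints that the kubelet may not be using the systemd cgroup driver. A hedged retry, assuming the KVM driver and cri-o runtime used throughout this report (the profile name is a placeholder; the --extra-config flag is the one quoted in the suggestion):

	# Illustrative retry only; <profile> stands in for the failing cluster's profile name.
	minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd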
	
	
	==> CRI-O <==
	Mar 18 22:12:58 embed-certs-141758 crio[705]: time="2024-03-18 22:12:58.996624066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799978996596014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6aa9459-b307-4338-bb29-7878407d4c4f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:12:58 embed-certs-141758 crio[705]: time="2024-03-18 22:12:58.997683675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d8a4af7-9307-4e9a-9196-120a9681ca6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:58 embed-certs-141758 crio[705]: time="2024-03-18 22:12:58.997735515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d8a4af7-9307-4e9a-9196-120a9681ca6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:58 embed-certs-141758 crio[705]: time="2024-03-18 22:12:58.998113241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985,PodSandboxId:602595fb2f2c5d54885221d7860233eb18e6f916bfea6fa242d4ec6e4143cd74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799436002382334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b08bb6c-9220-4ae9-83f9-0260b1e4a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b5744539,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e,PodSandboxId:e59b154fdcb690ee5d092c750083610ce85f7171cfda4cdfb540cf6728009f2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433963369730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k675p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727682ae-0ac1-4854-a49c-0f6ae4384551,},Annotations:map[string]string{io.kubernetes.container.hash: 3c90677e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3,PodSandboxId:67e1403b890e34cea4c112f7c9b20ebe59252d39d0b8e60b19028835d36a4d5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799433600202876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jltc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64
02012-bfc2-4049-b813-a9fa547277a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2ba32011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125,PodSandboxId:bd90f5c62c645ba5c0297a07dbf8cff27017e9ac21595aaeaae0aa7861a72bca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433718407713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rlz67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babdb200-b39a-4555-b14f-12e448531c
f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6431f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f,PodSandboxId:0901292a3664b1a15050f2dd3a9e84867cbac9c631a16d17612f801c38244946,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710799413035311980,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3094571113789a04c5cdf076b52bc74,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36,PodSandboxId:975040c926ee84eefbd295605d433a35e871f4aaefd61a2d5ea419abf184fb9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799413028361652,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8245b2c678cff590c94d28a682833c6,},Annotations:map[string]string{io.kubernetes.container.hash: db50e89d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74,PodSandboxId:211dd5511e79238af3341b082f48c5b2112b8b5252e8137cd108209115817a4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799413044255337,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344,PodSandboxId:9cc6f09fa70898bc9564b713acbbc29a0abf989de26febea6d327c830d6fb059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799412900589265,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a8d56c6e28bc3e3badd3055c8311287,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a6d9741a5d9b35c1eb6100662502fc8bc73108438c5425a6a607785035fb32,PodSandboxId:6cc22236962b7e07dc6152789dc9f79cddf68e0903a37dee43ceb2edb6537336,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710799119031320238,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d8a4af7-9307-4e9a-9196-120a9681ca6a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.043583715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d70a27ac-d15a-43aa-ac04-e6c45b02128a name=/runtime.v1.RuntimeService/Version
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.043694041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d70a27ac-d15a-43aa-ac04-e6c45b02128a name=/runtime.v1.RuntimeService/Version
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.045015888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=716f96e1-b0d6-4eb8-b315-8c3ac0c56735 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.045510416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799979045485213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=716f96e1-b0d6-4eb8-b315-8c3ac0c56735 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.046443256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=575b2244-4771-42dc-9657-b10fedee80d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.046524638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=575b2244-4771-42dc-9657-b10fedee80d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.046729840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985,PodSandboxId:602595fb2f2c5d54885221d7860233eb18e6f916bfea6fa242d4ec6e4143cd74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799436002382334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b08bb6c-9220-4ae9-83f9-0260b1e4a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b5744539,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e,PodSandboxId:e59b154fdcb690ee5d092c750083610ce85f7171cfda4cdfb540cf6728009f2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433963369730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k675p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727682ae-0ac1-4854-a49c-0f6ae4384551,},Annotations:map[string]string{io.kubernetes.container.hash: 3c90677e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3,PodSandboxId:67e1403b890e34cea4c112f7c9b20ebe59252d39d0b8e60b19028835d36a4d5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799433600202876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jltc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64
02012-bfc2-4049-b813-a9fa547277a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2ba32011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125,PodSandboxId:bd90f5c62c645ba5c0297a07dbf8cff27017e9ac21595aaeaae0aa7861a72bca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433718407713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rlz67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babdb200-b39a-4555-b14f-12e448531c
f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6431f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f,PodSandboxId:0901292a3664b1a15050f2dd3a9e84867cbac9c631a16d17612f801c38244946,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710799413035311980,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3094571113789a04c5cdf076b52bc74,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36,PodSandboxId:975040c926ee84eefbd295605d433a35e871f4aaefd61a2d5ea419abf184fb9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799413028361652,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8245b2c678cff590c94d28a682833c6,},Annotations:map[string]string{io.kubernetes.container.hash: db50e89d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74,PodSandboxId:211dd5511e79238af3341b082f48c5b2112b8b5252e8137cd108209115817a4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799413044255337,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344,PodSandboxId:9cc6f09fa70898bc9564b713acbbc29a0abf989de26febea6d327c830d6fb059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799412900589265,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a8d56c6e28bc3e3badd3055c8311287,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a6d9741a5d9b35c1eb6100662502fc8bc73108438c5425a6a607785035fb32,PodSandboxId:6cc22236962b7e07dc6152789dc9f79cddf68e0903a37dee43ceb2edb6537336,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710799119031320238,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=575b2244-4771-42dc-9657-b10fedee80d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.092001748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fce66b7-6fcf-4868-9414-7ade547236f5 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.092078453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fce66b7-6fcf-4868-9414-7ade547236f5 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.094034299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8942bcd2-556b-441c-a088-c3f292048bce name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.094450400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799979094426732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8942bcd2-556b-441c-a088-c3f292048bce name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.102545505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f83909b0-b7df-49ac-8b25-1f6744b65d87 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.102617354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f83909b0-b7df-49ac-8b25-1f6744b65d87 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.102927089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985,PodSandboxId:602595fb2f2c5d54885221d7860233eb18e6f916bfea6fa242d4ec6e4143cd74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799436002382334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b08bb6c-9220-4ae9-83f9-0260b1e4a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b5744539,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e,PodSandboxId:e59b154fdcb690ee5d092c750083610ce85f7171cfda4cdfb540cf6728009f2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433963369730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k675p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727682ae-0ac1-4854-a49c-0f6ae4384551,},Annotations:map[string]string{io.kubernetes.container.hash: 3c90677e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3,PodSandboxId:67e1403b890e34cea4c112f7c9b20ebe59252d39d0b8e60b19028835d36a4d5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799433600202876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jltc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64
02012-bfc2-4049-b813-a9fa547277a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2ba32011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125,PodSandboxId:bd90f5c62c645ba5c0297a07dbf8cff27017e9ac21595aaeaae0aa7861a72bca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433718407713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rlz67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babdb200-b39a-4555-b14f-12e448531c
f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6431f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f,PodSandboxId:0901292a3664b1a15050f2dd3a9e84867cbac9c631a16d17612f801c38244946,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710799413035311980,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3094571113789a04c5cdf076b52bc74,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36,PodSandboxId:975040c926ee84eefbd295605d433a35e871f4aaefd61a2d5ea419abf184fb9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799413028361652,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8245b2c678cff590c94d28a682833c6,},Annotations:map[string]string{io.kubernetes.container.hash: db50e89d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74,PodSandboxId:211dd5511e79238af3341b082f48c5b2112b8b5252e8137cd108209115817a4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799413044255337,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344,PodSandboxId:9cc6f09fa70898bc9564b713acbbc29a0abf989de26febea6d327c830d6fb059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799412900589265,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a8d56c6e28bc3e3badd3055c8311287,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a6d9741a5d9b35c1eb6100662502fc8bc73108438c5425a6a607785035fb32,PodSandboxId:6cc22236962b7e07dc6152789dc9f79cddf68e0903a37dee43ceb2edb6537336,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710799119031320238,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f83909b0-b7df-49ac-8b25-1f6744b65d87 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.143367436Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97fddef6-d750-4082-9cae-2861d03e14e6 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.143435944Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97fddef6-d750-4082-9cae-2861d03e14e6 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.146018097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=842480bc-b6aa-4cd7-98f8-02f25813ec3f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.146392056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799979146372211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=842480bc-b6aa-4cd7-98f8-02f25813ec3f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.146929913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9841cc2-535f-4608-b98f-521d07b9158c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.146987855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9841cc2-535f-4608-b98f-521d07b9158c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:12:59 embed-certs-141758 crio[705]: time="2024-03-18 22:12:59.147177696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985,PodSandboxId:602595fb2f2c5d54885221d7860233eb18e6f916bfea6fa242d4ec6e4143cd74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799436002382334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b08bb6c-9220-4ae9-83f9-0260b1e4a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b5744539,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e,PodSandboxId:e59b154fdcb690ee5d092c750083610ce85f7171cfda4cdfb540cf6728009f2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433963369730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k675p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727682ae-0ac1-4854-a49c-0f6ae4384551,},Annotations:map[string]string{io.kubernetes.container.hash: 3c90677e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3,PodSandboxId:67e1403b890e34cea4c112f7c9b20ebe59252d39d0b8e60b19028835d36a4d5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799433600202876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jltc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64
02012-bfc2-4049-b813-a9fa547277a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2ba32011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125,PodSandboxId:bd90f5c62c645ba5c0297a07dbf8cff27017e9ac21595aaeaae0aa7861a72bca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433718407713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rlz67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babdb200-b39a-4555-b14f-12e448531c
f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6431f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f,PodSandboxId:0901292a3664b1a15050f2dd3a9e84867cbac9c631a16d17612f801c38244946,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710799413035311980,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3094571113789a04c5cdf076b52bc74,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36,PodSandboxId:975040c926ee84eefbd295605d433a35e871f4aaefd61a2d5ea419abf184fb9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799413028361652,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8245b2c678cff590c94d28a682833c6,},Annotations:map[string]string{io.kubernetes.container.hash: db50e89d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74,PodSandboxId:211dd5511e79238af3341b082f48c5b2112b8b5252e8137cd108209115817a4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799413044255337,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344,PodSandboxId:9cc6f09fa70898bc9564b713acbbc29a0abf989de26febea6d327c830d6fb059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799412900589265,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a8d56c6e28bc3e3badd3055c8311287,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a6d9741a5d9b35c1eb6100662502fc8bc73108438c5425a6a607785035fb32,PodSandboxId:6cc22236962b7e07dc6152789dc9f79cddf68e0903a37dee43ceb2edb6537336,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710799119031320238,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9841cc2-535f-4608-b98f-521d07b9158c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6088a2f461a8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   602595fb2f2c5       storage-provisioner
	9c4e1201e3144       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   e59b154fdcb69       coredns-5dd5756b68-k675p
	45649fa931945       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   bd90f5c62c645       coredns-5dd5756b68-rlz67
	e06a57a587ded       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   67e1403b890e3       kube-proxy-jltc7
	736909ef62e4a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   211dd5511e792       kube-apiserver-embed-certs-141758
	9c38655e32331       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   0901292a3664b       kube-scheduler-embed-certs-141758
	15d9a82debe78       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   975040c926ee8       etcd-embed-certs-141758
	d2f95eb902268       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   9cc6f09fa7089       kube-controller-manager-embed-certs-141758
	d9a6d9741a5d9       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Exited              kube-apiserver            1                   6cc22236962b7       kube-apiserver-embed-certs-141758
	
	
	==> coredns [45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-141758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-141758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=embed-certs-141758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T22_03_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 22:03:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-141758
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 22:12:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 22:09:06 +0000   Mon, 18 Mar 2024 22:03:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 22:09:06 +0000   Mon, 18 Mar 2024 22:03:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 22:09:06 +0000   Mon, 18 Mar 2024 22:03:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 22:09:06 +0000   Mon, 18 Mar 2024 22:03:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    embed-certs-141758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e172d7ce4dbf4889bb535d39511e5f70
	  System UUID:                e172d7ce-4dbf-4889-bb53-5d39511e5f70
	  Boot ID:                    8756b731-c73d-436f-a7c4-89b722bcc512
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-k675p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-5dd5756b68-rlz67                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-141758                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-141758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-embed-certs-141758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-jltc7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-141758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-pmkgs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node embed-certs-141758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node embed-certs-141758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node embed-certs-141758 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-141758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-141758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-141758 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-141758 event: Registered Node embed-certs-141758 in Controller
	
	
	==> dmesg <==
	[  +0.052228] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042480] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.560295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.408996] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.433155] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.922761] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.055960] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059418] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.207689] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.134038] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.320089] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +5.420524] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +0.065989] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.004918] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[  +5.645939] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.406574] kauditd_printk_skb: 74 callbacks suppressed
	[Mar18 22:03] kauditd_printk_skb: 1 callbacks suppressed
	[  +1.553931] systemd-fstab-generator[3429]: Ignoring "noauto" option for root device
	[  +7.268791] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[  +0.085166] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.874019] systemd-fstab-generator[3963]: Ignoring "noauto" option for root device
	[  +0.107369] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 22:04] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36] <==
	{"level":"info","ts":"2024-03-18T22:03:33.520084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 switched to configuration voters=(5579817544954101747)"}
	{"level":"info","ts":"2024-03-18T22:03:33.520204Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","added-peer-id":"4d6f7e7e767b3ff3","added-peer-peer-urls":["https://192.168.39.243:2380"]}
	{"level":"info","ts":"2024-03-18T22:03:33.5375Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T22:03:33.54036Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4d6f7e7e767b3ff3","initial-advertise-peer-urls":["https://192.168.39.243:2380"],"listen-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.243:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T22:03:33.537941Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-03-18T22:03:33.54123Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T22:03:33.543966Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-03-18T22:03:34.165916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T22:03:34.165976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T22:03:34.165992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 1"}
	{"level":"info","ts":"2024-03-18T22:03:34.166003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T22:03:34.166008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgVoteResp from 4d6f7e7e767b3ff3 at term 2"}
	{"level":"info","ts":"2024-03-18T22:03:34.166016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T22:03:34.166023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d6f7e7e767b3ff3 elected leader 4d6f7e7e767b3ff3 at term 2"}
	{"level":"info","ts":"2024-03-18T22:03:34.169253Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:03:34.172234Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4d6f7e7e767b3ff3","local-member-attributes":"{Name:embed-certs-141758 ClientURLs:[https://192.168.39.243:2379]}","request-path":"/0/members/4d6f7e7e767b3ff3/attributes","cluster-id":"c7dcc22c4a571085","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T22:03:34.172284Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T22:03:34.175367Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T22:03:34.177922Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:03:34.178013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T22:03:34.179124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.243:2379"}
	{"level":"info","ts":"2024-03-18T22:03:34.198884Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T22:03:34.19896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T22:03:34.178032Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:03:34.200928Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 22:12:59 up 14 min,  0 users,  load average: 0.09, 0.14, 0.10
	Linux embed-certs-141758 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74] <==
	W0318 22:08:36.968704       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:08:36.968860       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:08:36.968869       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:08:36.969162       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:08:36.969345       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:08:36.970688       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:09:35.850735       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:09:36.969698       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:09:36.969879       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:09:36.969894       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:09:36.971097       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:09:36.971152       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:09:36.971164       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:10:35.851031       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 22:11:35.850948       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:11:36.971025       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:11:36.971215       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:11:36.971255       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:11:36.971318       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:11:36.971354       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:11:36.973420       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:12:35.850988       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-apiserver [d9a6d9741a5d9b35c1eb6100662502fc8bc73108438c5425a6a607785035fb32] <==
	W0318 22:03:25.908685       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:25.915247       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:25.920337       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:25.925924       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.019283       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.052188       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.083004       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.090074       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.104547       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.204406       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.217137       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.250700       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.368985       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.419754       1 logging.go:59] [core] [Channel #8 SubChannel #9] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.445166       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.481211       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.519084       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.524030       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.540529       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.572367       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.754499       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.838170       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.943966       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.955545       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:27.166944       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344] <==
	I0318 22:07:22.356140       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:07:51.900567       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:07:52.365502       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:08:21.907052       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:08:22.374677       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:08:51.914330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:08:52.384114       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:09:21.920324       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:09:22.392747       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:09:51.926709       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:09:52.402963       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 22:09:54.328473       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="347.388µs"
	I0318 22:10:07.320626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="105.14µs"
	E0318 22:10:21.934009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:10:22.413347       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:10:51.940534       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:10:52.422338       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:11:21.946912       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:11:22.431718       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:11:51.953466       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:11:52.442138       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:12:21.959544       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:12:22.451591       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:12:51.967340       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:12:52.460756       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3] <==
	I0318 22:03:54.264948       1 server_others.go:69] "Using iptables proxy"
	I0318 22:03:54.289897       1 node.go:141] Successfully retrieved node IP: 192.168.39.243
	I0318 22:03:54.353050       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 22:03:54.353101       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 22:03:54.360240       1 server_others.go:152] "Using iptables Proxier"
	I0318 22:03:54.360607       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 22:03:54.360978       1 server.go:846] "Version info" version="v1.28.4"
	I0318 22:03:54.361015       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 22:03:54.362944       1 config.go:188] "Starting service config controller"
	I0318 22:03:54.363574       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 22:03:54.363700       1 config.go:97] "Starting endpoint slice config controller"
	I0318 22:03:54.363733       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 22:03:54.366285       1 config.go:315] "Starting node config controller"
	I0318 22:03:54.366346       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 22:03:54.463782       1 shared_informer.go:318] Caches are synced for service config
	I0318 22:03:54.463935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 22:03:54.466934       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f] <==
	W0318 22:03:35.984022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 22:03:35.984056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 22:03:36.788532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 22:03:36.788662       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 22:03:36.792166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 22:03:36.792230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 22:03:36.832206       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 22:03:36.832418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 22:03:36.881258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 22:03:36.881312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 22:03:37.061468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 22:03:37.061537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 22:03:37.086124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 22:03:37.086157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 22:03:37.171922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 22:03:37.172120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 22:03:37.179115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 22:03:37.179251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 22:03:37.220550       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 22:03:37.220670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 22:03:37.247895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 22:03:37.248021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 22:03:37.541683       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 22:03:37.541783       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 22:03:39.175030       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 22:10:39 embed-certs-141758 kubelet[3762]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:10:39 embed-certs-141758 kubelet[3762]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:10:39 embed-certs-141758 kubelet[3762]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:10:39 embed-certs-141758 kubelet[3762]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:10:44 embed-certs-141758 kubelet[3762]: E0318 22:10:44.300442    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:10:55 embed-certs-141758 kubelet[3762]: E0318 22:10:55.301763    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:11:06 embed-certs-141758 kubelet[3762]: E0318 22:11:06.301212    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:11:18 embed-certs-141758 kubelet[3762]: E0318 22:11:18.301384    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:11:33 embed-certs-141758 kubelet[3762]: E0318 22:11:33.302791    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:11:39 embed-certs-141758 kubelet[3762]: E0318 22:11:39.339170    3762 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:11:39 embed-certs-141758 kubelet[3762]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:11:39 embed-certs-141758 kubelet[3762]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:11:39 embed-certs-141758 kubelet[3762]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:11:39 embed-certs-141758 kubelet[3762]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:11:47 embed-certs-141758 kubelet[3762]: E0318 22:11:47.304372    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:12:00 embed-certs-141758 kubelet[3762]: E0318 22:12:00.301895    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:12:14 embed-certs-141758 kubelet[3762]: E0318 22:12:14.300409    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:12:28 embed-certs-141758 kubelet[3762]: E0318 22:12:28.301035    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:12:39 embed-certs-141758 kubelet[3762]: E0318 22:12:39.337778    3762 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:12:39 embed-certs-141758 kubelet[3762]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:12:39 embed-certs-141758 kubelet[3762]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:12:39 embed-certs-141758 kubelet[3762]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:12:39 embed-certs-141758 kubelet[3762]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:12:42 embed-certs-141758 kubelet[3762]: E0318 22:12:42.301074    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:12:57 embed-certs-141758 kubelet[3762]: E0318 22:12:57.303139    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	
	
	==> storage-provisioner [6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985] <==
	I0318 22:03:56.158927       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 22:03:56.206617       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 22:03:56.206957       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 22:03:56.222360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 22:03:56.222615       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-141758_4cea16bc-7fbd-474e-935d-3b49dad23a05!
	I0318 22:03:56.225369       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"adea3981-c21e-473a-8b08-b95449c8a583", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-141758_4cea16bc-7fbd-474e-935d-3b49dad23a05 became leader
	I0318 22:03:56.323777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-141758_4cea16bc-7fbd-474e-935d-3b49dad23a05!
	

                                                
                                                
-- /stdout --
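The storage-provisioner log above shows the usual client-go leader-election flow: attempt to acquire the kube-system/k8s.io-minikube-hostpath lock, win the election, then start the provisioner controller. The following is only an illustrative sketch of that pattern, not minikube's or the provisioner's actual source; it uses the Lease-based lock that current client-go prefers, whereas the provisioner build in this log still records its election on an Endpoints object (see the LeaderElection event above), and it assumes a kubeconfig at the default location.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Load the default kubeconfig; a pod running in-cluster would use rest.InClusterConfig() instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	// Lease-based lock; name/namespace mirror the lock seen in the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	// Blocks until the context is cancelled; the callbacks bracket the
	// "became leader" / "stopped leading" transitions visible in the provisioner log.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Start the provisioner controller here.
			},
			OnStoppedLeading: func() {
				// Lost the lease; stop provisioning.
			},
		},
	})
}
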
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-141758 -n embed-certs-141758
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-141758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pmkgs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-141758 describe pod metrics-server-57f55c9bc5-pmkgs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-141758 describe pod metrics-server-57f55c9bc5-pmkgs: exit status 1 (63.816587ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pmkgs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-141758 describe pod metrics-server-57f55c9bc5-pmkgs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0318 22:04:07.940988   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 22:04:15.041139   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963041 -n no-preload-963041
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 22:13:04.200308172 +0000 UTC m=+6241.102096616
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
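The wait that timed out above is a 9m0s poll for a Running pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. Below is a minimal client-go sketch of that kind of label-selector wait; it is not minikube's helpers_test.go code, and the use of the no-preload-963041 kubeconfig context and a 5s poll interval are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Target the same kubeconfig context the test uses (assumed for this sketch).
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "no-preload-963041"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 5s, up to the same 9m budget the test allows, for a Running pod with the label.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err) // nil on success, non-nil (deadline exceeded) on timeout
}
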
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-963041 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-963041 logs -n 25: (2.096787852s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-389288 sudo cat                              | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo find                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo crio                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-389288                                       | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-369155 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | disable-driver-mounts-369155                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:50 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-660775  | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC |                     |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-141758            | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:54:36.607114   65699 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:54:36.607254   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607266   65699 out.go:304] Setting ErrFile to fd 2...
	I0318 21:54:36.607272   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607706   65699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:54:36.608596   65699 out.go:298] Setting JSON to false
	I0318 21:54:36.609468   65699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5821,"bootTime":1710793056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:54:36.609529   65699 start.go:139] virtualization: kvm guest
	I0318 21:54:36.611401   65699 out.go:177] * [no-preload-963041] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:54:36.612703   65699 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:54:36.612704   65699 notify.go:220] Checking for updates...
	I0318 21:54:36.613976   65699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:54:36.615157   65699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:54:36.616283   65699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:54:36.617431   65699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:54:36.618615   65699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:54:36.620094   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:54:36.620490   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.620537   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.634914   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0318 21:54:36.635251   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.635706   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.635728   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.636019   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.636173   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.636411   65699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:54:36.636719   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.636756   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.650608   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0318 21:54:36.650946   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.651358   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.651383   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.651694   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.651832   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.682407   65699 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:54:36.683826   65699 start.go:297] selected driver: kvm2
	I0318 21:54:36.683837   65699 start.go:901] validating driver "kvm2" against &{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.683941   65699 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:54:36.684624   65699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.684696   65699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:54:36.699415   65699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:54:36.699766   65699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:54:36.699827   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:54:36.699840   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:54:36.699883   65699 start.go:340] cluster config:
	{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.699984   65699 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.701584   65699 out.go:177] * Starting "no-preload-963041" primary control-plane node in "no-preload-963041" cluster
	I0318 21:54:36.702792   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:54:36.702911   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:54:36.703027   65699 cache.go:107] acquiring lock: {Name:mk20bcc8d34b80cc44c1e33bc5e0ec5cd82ba46e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703044   65699 cache.go:107] acquiring lock: {Name:mk299438a86024ea6c96280d8bbe30c1283fa996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703087   65699 cache.go:107] acquiring lock: {Name:mkf5facbc69c16807f75e75a80a4afa3f97a0ecc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703124   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0318 21:54:36.703127   65699 start.go:360] acquireMachinesLock for no-preload-963041: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:54:36.703141   65699 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 102.209µs
	I0318 21:54:36.703156   65699 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0318 21:54:36.703104   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 21:54:36.703174   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 21:54:36.703172   65699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 156.262µs
	I0318 21:54:36.703190   65699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703043   65699 cache.go:107] acquiring lock: {Name:mk4c82b4e60b551671fa99921294b8e1f551d382 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703189   65699 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 104.037µs
	I0318 21:54:36.703209   65699 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 21:54:36.703137   65699 cache.go:107] acquiring lock: {Name:mk847ac7ddb8863389782289e61001579ff6ec5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703204   65699 cache.go:107] acquiring lock: {Name:mk1bf8cc3e30a7cf88f25697f1021501ea6ee4ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703243   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 21:54:36.703254   65699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 163.57µs
	I0318 21:54:36.703233   65699 cache.go:107] acquiring lock: {Name:mkf9c9b33c4d1ca54e3364ad39dcd3b10bc50534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703265   65699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703224   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 21:54:36.703282   65699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 247.672µs
	I0318 21:54:36.703293   65699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 21:54:36.703293   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 21:54:36.703293   65699 cache.go:107] acquiring lock: {Name:mkd0bd00e6f69df37097a8ce792bcc8844efbc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703315   65699 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 156.33µs
	I0318 21:54:36.703329   65699 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 21:54:36.703363   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 21:54:36.703385   65699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 207.404µs
	I0318 21:54:36.703400   65699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703411   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 21:54:36.703419   65699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 164.5µs
	I0318 21:54:36.703435   65699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703447   65699 cache.go:87] Successfully saved all images to host disk.
	I0318 21:54:40.421098   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:43.493261   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:49.573105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:52.645158   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:58.725124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:01.797077   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:07.877116   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:10.949096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:17.029117   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:20.101131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:26.181141   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:29.253113   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:35.333097   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:38.405132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:44.485208   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:47.557123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:53.637185   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:56.709102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:02.789134   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:05.861146   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:11.941102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:15.013092   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:21.093132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:24.165129   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:30.245127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:33.317151   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:39.397126   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:42.469163   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:48.549145   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:51.621085   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:57.701118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:00.773108   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:06.853105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:09.925096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:16.005131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:19.077111   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:25.157130   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:28.229107   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:34.309152   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:37.381127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:43.461123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:46.533127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:52.613124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:55.685135   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:01.765118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:04.837197   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:07.840986   65211 start.go:364] duration metric: took 4m36.169318619s to acquireMachinesLock for "embed-certs-141758"
	I0318 21:58:07.841046   65211 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:07.841054   65211 fix.go:54] fixHost starting: 
	I0318 21:58:07.841507   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:07.841544   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:07.856544   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0318 21:58:07.856976   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:07.857424   65211 main.go:141] libmachine: Using API Version  1
	I0318 21:58:07.857452   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:07.857783   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:07.857971   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:07.858126   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 21:58:07.859909   65211 fix.go:112] recreateIfNeeded on embed-certs-141758: state=Stopped err=<nil>
	I0318 21:58:07.859947   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	W0318 21:58:07.860120   65211 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:07.862134   65211 out.go:177] * Restarting existing kvm2 VM for "embed-certs-141758" ...
	I0318 21:58:07.838706   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:07.838746   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839036   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:58:07.839060   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839263   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:58:07.840867   65170 machine.go:97] duration metric: took 4m37.426711052s to provisionDockerMachine
	I0318 21:58:07.840915   65170 fix.go:56] duration metric: took 4m37.446713188s for fixHost
	I0318 21:58:07.840923   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 4m37.446748943s
	W0318 21:58:07.840945   65170 start.go:713] error starting host: provision: host is not running
	W0318 21:58:07.841017   65170 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 21:58:07.841026   65170 start.go:728] Will try again in 5 seconds ...
	I0318 21:58:07.863352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Start
	I0318 21:58:07.863483   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring networks are active...
	I0318 21:58:07.864202   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network default is active
	I0318 21:58:07.864652   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network mk-embed-certs-141758 is active
	I0318 21:58:07.865077   65211 main.go:141] libmachine: (embed-certs-141758) Getting domain xml...
	I0318 21:58:07.865858   65211 main.go:141] libmachine: (embed-certs-141758) Creating domain...
	I0318 21:58:09.026367   65211 main.go:141] libmachine: (embed-certs-141758) Waiting to get IP...
	I0318 21:58:09.027144   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.027524   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.027580   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.027503   66223 retry.go:31] will retry after 260.499882ms: waiting for machine to come up
	I0318 21:58:09.289935   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.290490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.290522   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.290450   66223 retry.go:31] will retry after 328.000758ms: waiting for machine to come up
	I0318 21:58:09.619947   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.620337   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.620384   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.620305   66223 retry.go:31] will retry after 419.640035ms: waiting for machine to come up
	I0318 21:58:10.041775   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.042186   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.042213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.042134   66223 retry.go:31] will retry after 482.732439ms: waiting for machine to come up
	I0318 21:58:10.526892   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.527282   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.527307   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.527253   66223 retry.go:31] will retry after 718.696645ms: waiting for machine to come up
	I0318 21:58:11.247165   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.247545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.247571   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.247501   66223 retry.go:31] will retry after 603.951593ms: waiting for machine to come up
	I0318 21:58:12.842928   65170 start.go:360] acquireMachinesLock for default-k8s-diff-port-660775: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:58:11.853119   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.853408   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.853438   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.853362   66223 retry.go:31] will retry after 1.191963995s: waiting for machine to come up
	I0318 21:58:13.046915   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:13.047289   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:13.047319   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:13.047237   66223 retry.go:31] will retry after 1.314666633s: waiting for machine to come up
	I0318 21:58:14.363693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:14.364109   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:14.364135   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:14.364064   66223 retry.go:31] will retry after 1.341191632s: waiting for machine to come up
	I0318 21:58:15.707425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:15.707921   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:15.707951   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:15.707862   66223 retry.go:31] will retry after 1.887572842s: waiting for machine to come up
	I0318 21:58:17.596545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:17.596970   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:17.597002   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:17.596899   66223 retry.go:31] will retry after 2.820006704s: waiting for machine to come up
	I0318 21:58:20.420327   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:20.420693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:20.420714   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:20.420659   66223 retry.go:31] will retry after 3.099836206s: waiting for machine to come up
	I0318 21:58:23.522155   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:23.522490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:23.522517   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:23.522450   66223 retry.go:31] will retry after 4.512794132s: waiting for machine to come up
	I0318 21:58:29.414007   65622 start.go:364] duration metric: took 3m59.339882587s to acquireMachinesLock for "old-k8s-version-648232"
	I0318 21:58:29.414072   65622 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:29.414080   65622 fix.go:54] fixHost starting: 
	I0318 21:58:29.414429   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:29.414462   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:29.431057   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0318 21:58:29.431482   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:29.432042   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:58:29.432067   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:29.432376   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:29.432568   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:29.432725   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetState
	I0318 21:58:29.433956   65622 fix.go:112] recreateIfNeeded on old-k8s-version-648232: state=Stopped err=<nil>
	I0318 21:58:29.433996   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	W0318 21:58:29.434155   65622 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:29.436328   65622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-648232" ...
	I0318 21:58:29.437884   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .Start
	I0318 21:58:29.438022   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring networks are active...
	I0318 21:58:29.438616   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network default is active
	I0318 21:58:29.438967   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network mk-old-k8s-version-648232 is active
	I0318 21:58:29.439362   65622 main.go:141] libmachine: (old-k8s-version-648232) Getting domain xml...
	I0318 21:58:29.440065   65622 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
	I0318 21:58:28.036425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.036898   65211 main.go:141] libmachine: (embed-certs-141758) Found IP for machine: 192.168.39.243
	I0318 21:58:28.036949   65211 main.go:141] libmachine: (embed-certs-141758) Reserving static IP address...
	I0318 21:58:28.036967   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has current primary IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.037428   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.037452   65211 main.go:141] libmachine: (embed-certs-141758) DBG | skip adding static IP to network mk-embed-certs-141758 - found existing host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"}
	I0318 21:58:28.037461   65211 main.go:141] libmachine: (embed-certs-141758) Reserved static IP address: 192.168.39.243
	I0318 21:58:28.037473   65211 main.go:141] libmachine: (embed-certs-141758) Waiting for SSH to be available...
	I0318 21:58:28.037485   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Getting to WaitForSSH function...
	I0318 21:58:28.039459   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039778   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.039810   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039928   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH client type: external
	I0318 21:58:28.039955   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa (-rw-------)
	I0318 21:58:28.039995   65211 main.go:141] libmachine: (embed-certs-141758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:28.040027   65211 main.go:141] libmachine: (embed-certs-141758) DBG | About to run SSH command:
	I0318 21:58:28.040044   65211 main.go:141] libmachine: (embed-certs-141758) DBG | exit 0
	I0318 21:58:28.169219   65211 main.go:141] libmachine: (embed-certs-141758) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:28.169554   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetConfigRaw
	I0318 21:58:28.170153   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.172372   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.172760   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.172787   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.173016   65211 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/config.json ...
	I0318 21:58:28.173186   65211 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:28.173203   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:28.173399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.175433   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175767   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.175802   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175920   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.176079   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176389   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.176553   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.176790   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.176805   65211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:28.285370   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:28.285407   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285629   65211 buildroot.go:166] provisioning hostname "embed-certs-141758"
	I0318 21:58:28.285651   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285856   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.288382   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288708   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.288739   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288863   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.289067   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289220   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289361   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.289515   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.289717   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.289735   65211 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-141758 && echo "embed-certs-141758" | sudo tee /etc/hostname
	I0318 21:58:28.420311   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-141758
	
	I0318 21:58:28.420351   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.422864   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.423245   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423431   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.423608   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423759   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423891   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.424044   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.424234   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.424256   65211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-141758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-141758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-141758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:28.549277   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:28.549307   65211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:28.549325   65211 buildroot.go:174] setting up certificates
	I0318 21:58:28.549334   65211 provision.go:84] configureAuth start
	I0318 21:58:28.549343   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.549572   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.551881   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552183   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.552205   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.554341   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554629   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.554656   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554752   65211 provision.go:143] copyHostCerts
	I0318 21:58:28.554812   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:28.554825   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:28.554912   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:28.555020   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:28.555032   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:28.555062   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:28.555145   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:28.555155   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:28.555192   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:28.555259   65211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.embed-certs-141758 san=[127.0.0.1 192.168.39.243 embed-certs-141758 localhost minikube]
	I0318 21:58:28.706111   65211 provision.go:177] copyRemoteCerts
	I0318 21:58:28.706158   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:28.706185   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.708537   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708795   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.708822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.709164   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.709335   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.709446   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:28.796199   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:28.827207   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:58:28.854273   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:28.880505   65211 provision.go:87] duration metric: took 331.161751ms to configureAuth
	I0318 21:58:28.880524   65211 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:28.880716   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:58:28.880801   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.883232   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.883583   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883753   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.883926   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884087   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884186   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.884339   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.884481   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.884496   65211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:29.164330   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:29.164357   65211 machine.go:97] duration metric: took 991.159236ms to provisionDockerMachine
	I0318 21:58:29.164370   65211 start.go:293] postStartSetup for "embed-certs-141758" (driver="kvm2")
	I0318 21:58:29.164381   65211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:29.164434   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.164734   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:29.164758   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.167400   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167696   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.167719   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167867   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.168065   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.168235   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.168352   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.256141   65211 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:29.261086   65211 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:29.261104   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:29.261157   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:29.261229   65211 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:29.261309   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:29.271174   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:29.297161   65211 start.go:296] duration metric: took 132.781067ms for postStartSetup
	I0318 21:58:29.297192   65211 fix.go:56] duration metric: took 21.456139061s for fixHost
	I0318 21:58:29.297208   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.299741   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300102   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.300127   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300289   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.300480   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300750   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.300864   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:29.301028   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:29.301039   65211 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:29.413842   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799109.363417589
	
	I0318 21:58:29.413869   65211 fix.go:216] guest clock: 1710799109.363417589
	I0318 21:58:29.413876   65211 fix.go:229] Guest: 2024-03-18 21:58:29.363417589 +0000 UTC Remote: 2024-03-18 21:58:29.297195181 +0000 UTC m=+297.765354372 (delta=66.222408ms)
	I0318 21:58:29.413892   65211 fix.go:200] guest clock delta is within tolerance: 66.222408ms
	I0318 21:58:29.413899   65211 start.go:83] releasing machines lock for "embed-certs-141758", held for 21.572869797s
	I0318 21:58:29.413932   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.414191   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:29.416929   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417293   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.417318   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417500   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418019   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418159   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418230   65211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:29.418275   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.418330   65211 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:29.418344   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.420728   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421022   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421053   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421076   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421228   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421413   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421464   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421493   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421593   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.421673   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421749   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.421828   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421960   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.422081   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.502548   65211 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:29.531994   65211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:29.681482   65211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:29.689671   65211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:29.689735   65211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:29.711660   65211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:29.711682   65211 start.go:494] detecting cgroup driver to use...
	I0318 21:58:29.711750   65211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:29.728159   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:29.742409   65211 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:29.742450   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:29.757587   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:29.772218   65211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:29.883164   65211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:30.046773   65211 docker.go:233] disabling docker service ...
	I0318 21:58:30.046845   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:30.065878   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:30.081551   65211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:30.223188   65211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:30.353535   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:30.370291   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:30.391728   65211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:58:30.391789   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.409204   65211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:30.409281   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.426464   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.439964   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.452097   65211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:30.464410   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.475990   65211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.495092   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.506831   65211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:30.517410   65211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:30.517463   65211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:30.532465   65211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:58:30.543958   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:30.679788   65211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:30.839388   65211 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:30.839466   65211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:30.844666   65211 start.go:562] Will wait 60s for crictl version
	I0318 21:58:30.844720   65211 ssh_runner.go:195] Run: which crictl
	I0318 21:58:30.848886   65211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:30.888598   65211 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:30.888686   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.921097   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.954037   65211 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:58:30.955378   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:30.958352   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.958792   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:30.958822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.959064   65211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:30.963556   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:30.977788   65211 kubeadm.go:877] updating cluster {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:30.977899   65211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:58:30.977949   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:31.018843   65211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:58:31.018926   65211 ssh_runner.go:195] Run: which lz4
	I0318 21:58:31.023589   65211 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:58:31.028416   65211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:31.028445   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:58:30.668558   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting to get IP...
	I0318 21:58:30.669483   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.669936   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.670023   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.669931   66350 retry.go:31] will retry after 222.544346ms: waiting for machine to come up
	I0318 21:58:30.894570   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.895113   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.895140   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.895068   66350 retry.go:31] will retry after 355.752794ms: waiting for machine to come up
	I0318 21:58:31.252797   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.253265   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.253293   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.253217   66350 retry.go:31] will retry after 473.104426ms: waiting for machine to come up
	I0318 21:58:31.727579   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.728129   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.728157   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.728079   66350 retry.go:31] will retry after 566.412205ms: waiting for machine to come up
	I0318 21:58:32.295552   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.296044   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.296072   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.296004   66350 retry.go:31] will retry after 573.484484ms: waiting for machine to come up
	I0318 21:58:32.870871   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.871287   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.871346   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.871277   66350 retry.go:31] will retry after 932.863596ms: waiting for machine to come up
	I0318 21:58:33.805377   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:33.805847   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:33.805895   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:33.805795   66350 retry.go:31] will retry after 1.069321569s: waiting for machine to come up
	I0318 21:58:34.877311   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:34.877827   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:34.877860   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:34.877773   66350 retry.go:31] will retry after 1.27837332s: waiting for machine to come up
	I0318 21:58:32.944637   65211 crio.go:462] duration metric: took 1.921083293s to copy over tarball
	I0318 21:58:32.944709   65211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:35.696230   65211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751490576s)
	I0318 21:58:35.696261   65211 crio.go:469] duration metric: took 2.751600779s to extract the tarball
	I0318 21:58:35.696271   65211 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:35.739467   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:35.794398   65211 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:58:35.794427   65211 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:58:35.794436   65211 kubeadm.go:928] updating node { 192.168.39.243 8443 v1.28.4 crio true true} ...
	I0318 21:58:35.794559   65211 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-141758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:35.794625   65211 ssh_runner.go:195] Run: crio config
	I0318 21:58:35.844849   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:35.844877   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:35.844888   65211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:35.844923   65211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-141758 NodeName:embed-certs-141758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:58:35.845069   65211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-141758"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:35.845124   65211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:58:35.856885   65211 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:35.856950   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:35.867990   65211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 21:58:35.887057   65211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:35.909244   65211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
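The kubeadm.yaml.new copied above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, as generated earlier in the log). As a hedged sketch, not minikube's own code, one way to sanity-check such a file is to decode each document and print its kind; the path and struct below are illustrative and assume gopkg.in/yaml.v3 is available:

	// Illustrative sketch: walk the documents of a multi-document kubeadm
	// config and report each apiVersion/kind.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("kubeadm.yaml") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc typeMeta
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			// Expect InitConfiguration, ClusterConfiguration,
			// KubeletConfiguration and KubeProxyConfiguration in turn.
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}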
	I0318 21:58:35.931267   65211 ssh_runner.go:195] Run: grep 192.168.39.243	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:35.935793   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:35.950323   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:36.093377   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:36.112548   65211 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758 for IP: 192.168.39.243
	I0318 21:58:36.112575   65211 certs.go:194] generating shared ca certs ...
	I0318 21:58:36.112596   65211 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:36.112766   65211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:36.112813   65211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:36.112822   65211 certs.go:256] generating profile certs ...
	I0318 21:58:36.112943   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/client.key
	I0318 21:58:36.113043   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key.d575a4ae
	I0318 21:58:36.113097   65211 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key
	I0318 21:58:36.113263   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:36.113307   65211 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:36.113322   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:36.113359   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:36.113396   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:36.113429   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:36.113536   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:36.114412   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:36.147930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:36.177554   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:36.208374   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:36.243425   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 21:58:36.276720   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:58:36.317930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:36.345717   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:58:36.371655   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:36.396998   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:36.422750   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:36.448117   65211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:36.466558   65211 ssh_runner.go:195] Run: openssl version
	I0318 21:58:36.472888   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:36.484389   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489534   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489585   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.496045   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:36.507723   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:36.519030   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524214   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524267   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.531109   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:36.543912   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:36.556130   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561330   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561369   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.567883   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
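The openssl/ln sequence above installs each CA certificate under /etc/ssl/certs using its OpenSSL subject hash as the link name (e.g. b5213941.0 for minikubeCA.pem). A minimal sketch of the same pattern, with illustrative paths and assuming the openssl binary is on PATH; this is not minikube's implementation:

	// Illustrative sketch: compute a CA certificate's subject hash with
	// openssl and install a <hash>.0 symlink so system TLS lookups find it.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: drop any stale link, then create a fresh one.
		_ = os.Remove(link)
		if err := os.Symlink(certPath, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("installed", link, "->", certPath)
	}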
	I0318 21:58:36.158196   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:36.158633   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:36.158667   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:36.158581   66350 retry.go:31] will retry after 1.348066025s: waiting for machine to come up
	I0318 21:58:37.509248   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:37.509617   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:37.509637   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:37.509581   66350 retry.go:31] will retry after 2.080074922s: waiting for machine to come up
	I0318 21:58:39.591514   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:39.591973   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:39.592001   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:39.591934   66350 retry.go:31] will retry after 2.302421788s: waiting for machine to come up
	I0318 21:58:36.579819   65211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:36.824046   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:36.831273   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:36.838571   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:36.845621   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:36.852423   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:36.859433   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
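The -checkend 86400 checks above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done without shelling out; a minimal sketch using Go's crypto/x509 (the certificate path is just an example taken from the log):

	// Illustrative sketch: load a PEM certificate and report whether it
	// expires within the next 24 hours, the same question as -checkend 86400.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h; regeneration needed")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}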
	I0318 21:58:36.866091   65211 kubeadm.go:391] StartCluster: {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:36.866212   65211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:36.866263   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:36.912390   65211 cri.go:89] found id: ""
	I0318 21:58:36.912460   65211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:36.929896   65211 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:36.929923   65211 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:36.929931   65211 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:36.929985   65211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:36.947191   65211 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:36.948613   65211 kubeconfig.go:125] found "embed-certs-141758" server: "https://192.168.39.243:8443"
	I0318 21:58:36.951641   65211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:36.966095   65211 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.243
	I0318 21:58:36.966135   65211 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:36.966150   65211 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:36.966216   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:37.022620   65211 cri.go:89] found id: ""
	I0318 21:58:37.022680   65211 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:37.042338   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:37.054534   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:37.054552   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:37.054588   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:37.066099   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:37.066166   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:37.077340   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:37.088158   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:37.088214   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:37.099190   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.110081   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:37.110118   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.121852   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:37.133161   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:37.133215   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:37.144199   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:37.155593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.271593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.921199   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.175721   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.264478   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.377591   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:38.377683   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:38.878031   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.377859   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.417546   65211 api_server.go:72] duration metric: took 1.039957218s to wait for apiserver process to appear ...
	I0318 21:58:39.417576   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:58:39.417599   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:39.418125   65211 api_server.go:269] stopped: https://192.168.39.243:8443/healthz: Get "https://192.168.39.243:8443/healthz": dial tcp 192.168.39.243:8443: connect: connection refused
	I0318 21:58:39.917663   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.450620   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.450656   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.450668   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.489722   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.489755   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.918487   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.924551   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:42.924584   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.418077   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.424938   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:43.424969   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.918053   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.922905   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 21:58:43.931126   65211 api_server.go:141] control plane version: v1.28.4
	I0318 21:58:43.931151   65211 api_server.go:131] duration metric: took 4.513568499s to wait for apiserver health ...
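The healthz probes above go from connection refused, to 403/500 while the bootstrap post-start hooks finish, to 200. A minimal sketch of that polling pattern, assuming an anonymous probe with TLS verification disabled as in the log (illustrative only, not the minikube implementation):

	// Illustrative sketch: poll the apiserver's /healthz until it answers 200
	// or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.39.243:8443/healthz" // address from the log
		deadline := time.Now().Add(2 * time.Minute)

		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					fmt.Println("apiserver is healthy")
					return
				}
				fmt.Println("healthz returned", status, "- retrying")
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		fmt.Println("gave up waiting for a healthy apiserver")
	}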
	I0318 21:58:43.931159   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:43.931173   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:43.932876   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:58:41.897573   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:41.898012   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:41.898035   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:41.897964   66350 retry.go:31] will retry after 2.645096928s: waiting for machine to come up
	I0318 21:58:44.544646   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:44.545116   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:44.545153   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:44.545053   66350 retry.go:31] will retry after 3.010240256s: waiting for machine to come up
	I0318 21:58:43.934155   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:58:43.948750   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:58:43.978849   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:58:43.991046   65211 system_pods.go:59] 8 kube-system pods found
	I0318 21:58:43.991082   65211 system_pods.go:61] "coredns-5dd5756b68-r9pft" [add358cf-d544-4107-a05f-5e60542ea456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:58:43.991089   65211 system_pods.go:61] "etcd-embed-certs-141758" [31274121-ec65-46b5-bcda-65698c28bd1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:58:43.991095   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [61e4c0db-7a20-4c93-83b3-de4738e82614] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:58:43.991100   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [c2ffe900-4e3a-4c21-ae8f-cd42475207c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:58:43.991105   65211 system_pods.go:61] "kube-proxy-klmnb" [45b0c762-4eaf-4e8a-b321-0d474f61086e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:58:43.991109   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [5aeed9aa-9d98-49c0-bf8a-3998738f6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:58:43.991114   65211 system_pods.go:61] "metrics-server-57f55c9bc5-vt7hj" [949e4c0f-6a76-4141-b30c-f27291873f14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:58:43.991123   65211 system_pods.go:61] "storage-provisioner" [0aca1af6-3221-4698-915b-cabb9da662bf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:58:43.991128   65211 system_pods.go:74] duration metric: took 12.25858ms to wait for pod list to return data ...
	I0318 21:58:43.991136   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:58:43.996109   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:58:43.996135   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 21:58:43.996146   65211 node_conditions.go:105] duration metric: took 5.004614ms to run NodePressure ...
	I0318 21:58:43.996163   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:44.227606   65211 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234823   65211 kubeadm.go:733] kubelet initialised
	I0318 21:58:44.234846   65211 kubeadm.go:734] duration metric: took 7.215375ms waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234854   65211 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:58:44.241197   65211 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.248990   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249008   65211 pod_ready.go:81] duration metric: took 7.784519ms for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.249016   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249022   65211 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.254792   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254820   65211 pod_ready.go:81] duration metric: took 5.788084ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.254833   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254846   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.261248   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261272   65211 pod_ready.go:81] duration metric: took 6.415486ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.261282   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261291   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.383016   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383056   65211 pod_ready.go:81] duration metric: took 121.750871ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.383069   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383078   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784241   65211 pod_ready.go:92] pod "kube-proxy-klmnb" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:44.784264   65211 pod_ready.go:81] duration metric: took 401.177044ms for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784272   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
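pod_ready.go above polls each system-critical pod until its Ready condition is True or the 4m0s budget runs out. A hedged sketch of the same idea with client-go (kubeconfig path and pod name are placeholders taken from the log; this is not minikube's code):

	// Illustrative sketch: poll a kube-system pod until PodReady is True.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-embed-certs-141758", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				log.Fatal("timed out waiting for pod to become Ready")
			case <-time.After(2 * time.Second):
			}
		}
	}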
	I0318 21:58:48.950018   65699 start.go:364] duration metric: took 4m12.246849763s to acquireMachinesLock for "no-preload-963041"
	I0318 21:58:48.950078   65699 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:48.950087   65699 fix.go:54] fixHost starting: 
	I0318 21:58:48.950522   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:48.950556   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:48.966094   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0318 21:58:48.966492   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:48.966970   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:58:48.966994   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:48.967295   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:48.967443   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:58:48.967548   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:58:48.968800   65699 fix.go:112] recreateIfNeeded on no-preload-963041: state=Stopped err=<nil>
	I0318 21:58:48.968835   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	W0318 21:58:48.969105   65699 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:48.970900   65699 out.go:177] * Restarting existing kvm2 VM for "no-preload-963041" ...
	I0318 21:58:47.559274   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559793   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has current primary IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559814   65622 main.go:141] libmachine: (old-k8s-version-648232) Found IP for machine: 192.168.61.111
	I0318 21:58:47.559828   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserving static IP address...
	I0318 21:58:47.560325   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.560359   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserved static IP address: 192.168.61.111
	I0318 21:58:47.560385   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | skip adding static IP to network mk-old-k8s-version-648232 - found existing host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"}
	I0318 21:58:47.560401   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Getting to WaitForSSH function...
	I0318 21:58:47.560417   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting for SSH to be available...
	I0318 21:58:47.562852   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563285   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.563314   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563494   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH client type: external
	I0318 21:58:47.563522   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa (-rw-------)
	I0318 21:58:47.563561   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:47.563576   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | About to run SSH command:
	I0318 21:58:47.563622   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | exit 0
	I0318 21:58:47.692948   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:47.693373   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:58:47.694034   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:47.696795   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697184   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.697213   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697437   65622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:58:47.697637   65622 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:47.697658   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:47.697846   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.700225   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700525   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.700549   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700649   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.700816   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.700993   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.701112   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.701276   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.701440   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.701450   65622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:47.809658   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:47.809690   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.809920   65622 buildroot.go:166] provisioning hostname "old-k8s-version-648232"
	I0318 21:58:47.809945   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.810132   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.812510   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.812869   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.812896   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.813079   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.813266   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813414   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813559   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.813726   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.813935   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.813954   65622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-648232 && echo "old-k8s-version-648232" | sudo tee /etc/hostname
	I0318 21:58:47.949030   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-648232
	
	I0318 21:58:47.949063   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.952028   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952387   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.952424   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952586   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.952768   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.952972   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.953109   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.953280   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.953488   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.953514   65622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-648232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-648232/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-648232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:48.072416   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:48.072457   65622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:48.072484   65622 buildroot.go:174] setting up certificates
	I0318 21:58:48.072494   65622 provision.go:84] configureAuth start
	I0318 21:58:48.072506   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:48.072802   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.075880   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076202   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.076235   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076407   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.078791   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079125   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.079155   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079292   65622 provision.go:143] copyHostCerts
	I0318 21:58:48.079370   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:48.079385   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:48.079441   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:48.079552   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:48.079565   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:48.079595   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:48.079675   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:48.079686   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:48.079719   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:48.079797   65622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-648232 san=[127.0.0.1 192.168.61.111 localhost minikube old-k8s-version-648232]
	I0318 21:58:48.236852   65622 provision.go:177] copyRemoteCerts
	I0318 21:58:48.236923   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:48.236952   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.239485   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.239807   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.239839   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.240022   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.240187   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.240338   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.240470   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.338739   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:48.367538   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:58:48.397586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:48.425384   65622 provision.go:87] duration metric: took 352.877274ms to configureAuth
	I0318 21:58:48.425415   65622 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:48.425624   65622 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:58:48.425693   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.427989   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428345   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.428365   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428593   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.428793   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.428968   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.429114   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.429269   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.429434   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.429455   65622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:48.706098   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:48.706131   65622 machine.go:97] duration metric: took 1.008474629s to provisionDockerMachine
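The "%!s(MISSING)" in the CRI-O sysconfig command logged at 21:58:48.429455 is Go's fmt placeholder for a %s verb with no matching argument, so it is the logger, not the shell, that lost the literal %s. The command presumably sent over SSH, reconstructed here rather than copied from the log, would be:

    # presumed form of the sysconfig command, with the literal %s restored
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio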
	I0318 21:58:48.706148   65622 start.go:293] postStartSetup for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:58:48.706165   65622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:48.706193   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.706546   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:48.706580   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.709104   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709434   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.709464   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709589   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.709787   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.709969   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.710109   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.792915   65622 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:48.797845   65622 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:48.797864   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:48.797932   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:48.798038   65622 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:48.798150   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:48.808487   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:48.838863   65622 start.go:296] duration metric: took 132.703395ms for postStartSetup
	I0318 21:58:48.838896   65622 fix.go:56] duration metric: took 19.424816589s for fixHost
	I0318 21:58:48.838927   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.841223   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841572   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.841603   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841683   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.841876   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842015   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842138   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.842295   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.842469   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.842483   65622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:48.949868   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799128.925696756
	
	I0318 21:58:48.949893   65622 fix.go:216] guest clock: 1710799128.925696756
	I0318 21:58:48.949901   65622 fix.go:229] Guest: 2024-03-18 21:58:48.925696756 +0000 UTC Remote: 2024-03-18 21:58:48.838901995 +0000 UTC m=+258.909510680 (delta=86.794761ms)
	I0318 21:58:48.949925   65622 fix.go:200] guest clock delta is within tolerance: 86.794761ms
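The clock probe logged as "date +%!s(MISSING).%!N(MISSING)" at 21:58:48.842483 is presumably "date +%s.%N", with the format verbs again swallowed by the logger; epoch seconds plus nanoseconds is exactly the shape of the guest reading 1710799128.925696756 compared against the host time above. A minimal sketch of the same probe:

    # presumed guest-clock probe: epoch seconds and nanoseconds
    date +%s.%N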
	I0318 21:58:48.949932   65622 start.go:83] releasing machines lock for "old-k8s-version-648232", held for 19.535879787s
	I0318 21:58:48.949963   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.950245   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.952656   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953000   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.953030   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953184   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953664   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953845   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953931   65622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:48.953973   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.954053   65622 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:48.954070   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.956479   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956764   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956801   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.956828   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956944   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957100   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957250   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957281   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.957302   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.957432   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.957451   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957582   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957721   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957858   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:49.066050   65622 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:49.072126   65622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:49.220860   65622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:49.227821   65622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:49.227882   65622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:49.245262   65622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
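The find invocation at 21:58:49.227882 shows the same logger artifact in "%!p(MISSING)"; the directive is presumably -printf "%p, ", printing each matched CNI config path before it is renamed. A runnable sketch of that step, with the verb restored and the parentheses and globs escaped for an interactive shell (the log shows them unescaped):

    # presumed form of the step that disabled 87-podman-bridge.conflist above
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;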
	I0318 21:58:49.245285   65622 start.go:494] detecting cgroup driver to use...
	I0318 21:58:49.245359   65622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:49.261736   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:49.278239   65622 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:49.278289   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:49.297240   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:49.312813   65622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:49.435983   65622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:49.584356   65622 docker.go:233] disabling docker service ...
	I0318 21:58:49.584432   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:49.603469   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:49.619602   65622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:49.775541   65622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:49.919861   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:49.940785   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:49.964296   65622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:58:49.964356   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.976612   65622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:49.977221   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.988978   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.000697   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.012348   65622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:50.023873   65622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:50.033574   65622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:50.033611   65622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:50.047262   65622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:58:50.058328   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:50.205960   65622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:50.356293   65622 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:50.356376   65622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:50.361732   65622 start.go:562] Will wait 60s for crictl version
	I0318 21:58:50.361796   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:50.366347   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:50.406298   65622 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:50.406398   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.440705   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.473017   65622 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:58:46.795337   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:49.295100   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.299437   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:48.972407   65699 main.go:141] libmachine: (no-preload-963041) Calling .Start
	I0318 21:58:48.972572   65699 main.go:141] libmachine: (no-preload-963041) Ensuring networks are active...
	I0318 21:58:48.973251   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network default is active
	I0318 21:58:48.973606   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network mk-no-preload-963041 is active
	I0318 21:58:48.973992   65699 main.go:141] libmachine: (no-preload-963041) Getting domain xml...
	I0318 21:58:48.974629   65699 main.go:141] libmachine: (no-preload-963041) Creating domain...
	I0318 21:58:50.190010   65699 main.go:141] libmachine: (no-preload-963041) Waiting to get IP...
	I0318 21:58:50.190750   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.191241   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.191320   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.191220   66466 retry.go:31] will retry after 238.162453ms: waiting for machine to come up
	I0318 21:58:50.430778   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.431262   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.431292   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.431191   66466 retry.go:31] will retry after 318.744541ms: waiting for machine to come up
	I0318 21:58:50.751612   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.752051   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.752086   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.752007   66466 retry.go:31] will retry after 464.29047ms: waiting for machine to come up
	I0318 21:58:51.218462   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.219034   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.219062   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.218983   66466 retry.go:31] will retry after 476.466311ms: waiting for machine to come up
	I0318 21:58:50.474496   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:50.477908   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478353   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:50.478389   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478618   65622 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:50.483617   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:50.499147   65622 kubeadm.go:877] updating cluster {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:50.499269   65622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:58:50.499333   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:50.551649   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:50.551716   65622 ssh_runner.go:195] Run: which lz4
	I0318 21:58:50.556525   65622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:58:50.561566   65622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
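The existence check for the preload tarball logged at 21:58:50.556525 shows stat -c "%!s(MISSING) %!y(MISSING)"; the format string is presumably "%s %y" (file size and last modification time), mangled only in the log output. Sketch:

    # presumed preload existence check: size and mtime of the tarball, if present
    stat -c "%s %y" /preloaded.tar.lz4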
	I0318 21:58:50.561594   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:58:52.646283   65622 crio.go:462] duration metric: took 2.089798336s to copy over tarball
	I0318 21:58:52.646359   65622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:53.792483   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.696634   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.697179   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.697208   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.697099   66466 retry.go:31] will retry after 520.896381ms: waiting for machine to come up
	I0318 21:58:52.219861   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:52.220480   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:52.220506   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:52.220414   66466 retry.go:31] will retry after 872.240898ms: waiting for machine to come up
	I0318 21:58:53.094123   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.094547   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.094580   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.094499   66466 retry.go:31] will retry after 757.325359ms: waiting for machine to come up
	I0318 21:58:53.852954   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.853422   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.853453   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.853358   66466 retry.go:31] will retry after 1.459327383s: waiting for machine to come up
	I0318 21:58:55.313969   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:55.314382   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:55.314413   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:55.314328   66466 retry.go:31] will retry after 1.373606235s: waiting for machine to come up
	I0318 21:58:55.995228   65622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.348837805s)
	I0318 21:58:55.995262   65622 crio.go:469] duration metric: took 3.348951107s to extract the tarball
	I0318 21:58:55.995271   65622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:56.043148   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:56.091295   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:56.091320   65622 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:58:56.091409   65622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.091418   65622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.091431   65622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.091421   65622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.091448   65622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.091471   65622 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.091506   65622 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.091512   65622 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:58:56.092923   65622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.093028   65622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.093048   65622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.093052   65622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.092924   65622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.093136   65622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.093143   65622 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:58:56.093250   65622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.239200   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.242232   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.244160   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.248823   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.255548   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.264753   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.306940   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:58:56.359783   65622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:58:56.359825   65622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.359874   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413012   65622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:58:56.413051   65622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.413101   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413420   65622 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:58:56.413455   65622 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.413490   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.442743   65622 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:58:56.442787   65622 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.442832   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.450680   65622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:58:56.450733   65622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.450798   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.462926   65622 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:58:56.462963   65622 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:58:56.462989   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.462992   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.463034   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.463090   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.463138   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.463145   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.463159   65622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:58:56.463183   65622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.463221   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.592127   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.592159   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:58:56.593931   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:58:56.593968   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:58:56.593973   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:58:56.594059   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:58:56.594143   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:58:56.660138   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:58:56.660360   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:58:56.983635   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:57.142451   65622 cache_images.go:92] duration metric: took 1.051113719s to LoadCachedImages
	W0318 21:58:57.142554   65622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0318 21:58:57.142575   65622 kubeadm.go:928] updating node { 192.168.61.111 8443 v1.20.0 crio true true} ...
	I0318 21:58:57.142723   65622 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-648232 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:57.142797   65622 ssh_runner.go:195] Run: crio config
	I0318 21:58:57.195416   65622 cni.go:84] Creating CNI manager for ""
	I0318 21:58:57.195439   65622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:57.195451   65622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:57.195468   65622 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-648232 NodeName:old-k8s-version-648232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:58:57.195585   65622 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-648232"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:57.195650   65622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:58:57.208700   65622 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:57.208757   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:57.220276   65622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 21:58:57.239513   65622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:57.258540   65622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 21:58:57.277932   65622 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:57.282433   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:57.298049   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:57.427745   65622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:57.459845   65622 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232 for IP: 192.168.61.111
	I0318 21:58:57.459867   65622 certs.go:194] generating shared ca certs ...
	I0318 21:58:57.459904   65622 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:57.460072   65622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:57.460123   65622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:57.460138   65622 certs.go:256] generating profile certs ...
	I0318 21:58:57.460254   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key
	I0318 21:58:57.460328   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4
	I0318 21:58:57.460376   65622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key
	I0318 21:58:57.460521   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:57.460560   65622 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:57.460573   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:57.460602   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:57.460637   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:57.460668   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:57.460733   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:57.461586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:57.515591   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:57.541750   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:57.575282   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:57.617495   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:58:57.657111   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:58:57.705104   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:57.737956   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:58:57.766218   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:57.793952   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:57.824458   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:57.852188   65622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:57.872773   65622 ssh_runner.go:195] Run: openssl version
	I0318 21:58:57.880817   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:57.896644   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902576   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902636   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.908893   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:57.922730   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:57.936508   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941802   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941839   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.948093   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:57.961852   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:57.974049   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978886   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978929   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.984848   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:57.997033   65622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:58.002171   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:58.008665   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:58.014908   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:58.021663   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:58.029605   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:58.038208   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:58:58.044738   65622 kubeadm.go:391] StartCluster: {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:58.044828   65622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:58.044881   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.095866   65622 cri.go:89] found id: ""
	I0318 21:58:58.096010   65622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:58.108723   65622 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:58.108745   65622 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:58.108751   65622 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:58.108797   65622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:58.120754   65622 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:58.121803   65622 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:58:58.122532   65622 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-648232" cluster setting kubeconfig missing "old-k8s-version-648232" context setting]
	I0318 21:58:58.123561   65622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:58.125229   65622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:58.136331   65622 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.111
	I0318 21:58:58.136360   65622 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:58.136372   65622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:58.136416   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.179370   65622 cri.go:89] found id: ""
	I0318 21:58:58.179465   65622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:58.197860   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:58.208772   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:58.208796   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:58.208837   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:58.219033   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:58.219090   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:58.230223   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:58.240823   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:58.240886   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:58.251629   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.262525   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:58.262573   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.274831   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:58.286644   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:58.286690   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:58.298127   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:58.309664   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:58.456818   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.106974   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.334718   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.434113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.534368   65622 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:59.534461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:57.057776   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:57.791727   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:57.791754   65211 pod_ready.go:81] duration metric: took 13.007474768s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:57.791769   65211 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:59.800074   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:56.689643   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:56.690039   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:56.690064   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:56.690020   66466 retry.go:31] will retry after 1.905319343s: waiting for machine to come up
	I0318 21:58:58.597961   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:58.598470   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:58.598501   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:58.598420   66466 retry.go:31] will retry after 2.720364267s: waiting for machine to come up
	I0318 21:59:01.321901   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:01.322290   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:01.322312   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:01.322254   66466 retry.go:31] will retry after 2.73029124s: waiting for machine to come up
	I0318 21:59:00.035251   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:00.534822   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.034721   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.535447   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.034809   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.535193   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.034597   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.534670   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.035493   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.535148   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.299143   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.800475   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.054294   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:04.054715   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:04.054752   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:04.054671   66466 retry.go:31] will retry after 3.148777081s: waiting for machine to come up
	I0318 21:59:08.706453   65170 start.go:364] duration metric: took 55.86344587s to acquireMachinesLock for "default-k8s-diff-port-660775"
	I0318 21:59:08.706504   65170 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:59:08.706515   65170 fix.go:54] fixHost starting: 
	I0318 21:59:08.706934   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:08.706970   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:08.723564   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0318 21:59:08.723935   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:08.724359   65170 main.go:141] libmachine: Using API Version  1
	I0318 21:59:08.724381   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:08.724671   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:08.724874   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:08.725045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 21:59:08.726635   65170 fix.go:112] recreateIfNeeded on default-k8s-diff-port-660775: state=Stopped err=<nil>
	I0318 21:59:08.726656   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	W0318 21:59:08.726813   65170 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:59:08.728839   65170 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-660775" ...
	I0318 21:59:05.035054   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:05.535108   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.035211   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.535398   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.035017   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.534769   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.035221   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.534593   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.035328   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.534533   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.730181   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Start
	I0318 21:59:08.730374   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring networks are active...
	I0318 21:59:08.731140   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network default is active
	I0318 21:59:08.731488   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network mk-default-k8s-diff-port-660775 is active
	I0318 21:59:08.731850   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Getting domain xml...
	I0318 21:59:08.732544   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Creating domain...
	I0318 21:59:10.014924   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting to get IP...
	I0318 21:59:10.015822   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016215   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016299   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.016206   66608 retry.go:31] will retry after 301.369371ms: waiting for machine to come up
	I0318 21:59:07.205807   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206239   65699 main.go:141] libmachine: (no-preload-963041) Found IP for machine: 192.168.72.84
	I0318 21:59:07.206266   65699 main.go:141] libmachine: (no-preload-963041) Reserving static IP address...
	I0318 21:59:07.206281   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has current primary IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206636   65699 main.go:141] libmachine: (no-preload-963041) Reserved static IP address: 192.168.72.84
	I0318 21:59:07.206659   65699 main.go:141] libmachine: (no-preload-963041) Waiting for SSH to be available...
	I0318 21:59:07.206686   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.206711   65699 main.go:141] libmachine: (no-preload-963041) DBG | skip adding static IP to network mk-no-preload-963041 - found existing host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"}
	I0318 21:59:07.206728   65699 main.go:141] libmachine: (no-preload-963041) DBG | Getting to WaitForSSH function...
	I0318 21:59:07.208790   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209157   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.209202   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209306   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH client type: external
	I0318 21:59:07.209331   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa (-rw-------)
	I0318 21:59:07.209367   65699 main.go:141] libmachine: (no-preload-963041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:07.209381   65699 main.go:141] libmachine: (no-preload-963041) DBG | About to run SSH command:
	I0318 21:59:07.209395   65699 main.go:141] libmachine: (no-preload-963041) DBG | exit 0
	I0318 21:59:07.337357   65699 main.go:141] libmachine: (no-preload-963041) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:07.337688   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetConfigRaw
	I0318 21:59:07.338258   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.340609   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.340957   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.340996   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.341213   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:59:07.341396   65699 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:07.341462   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:07.341668   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.343956   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344275   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.344311   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344395   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.344580   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344756   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344891   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.345086   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.345264   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.345276   65699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:07.457491   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:07.457543   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457778   65699 buildroot.go:166] provisioning hostname "no-preload-963041"
	I0318 21:59:07.457802   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457975   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.460729   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461120   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.461145   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461286   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.461480   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461643   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461797   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.461980   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.462179   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.462193   65699 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-963041 && echo "no-preload-963041" | sudo tee /etc/hostname
	I0318 21:59:07.592194   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-963041
	
	I0318 21:59:07.592219   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.594794   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595141   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.595177   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595305   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.595484   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595673   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595836   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.595987   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.596144   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.596160   65699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-963041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-963041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-963041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:07.719593   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:07.719622   65699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:07.719655   65699 buildroot.go:174] setting up certificates
	I0318 21:59:07.719667   65699 provision.go:84] configureAuth start
	I0318 21:59:07.719681   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.719928   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.722544   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.722907   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.722935   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.723095   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.725108   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725391   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.725420   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725522   65699 provision.go:143] copyHostCerts
	I0318 21:59:07.725582   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:07.725595   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:07.725665   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:07.725780   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:07.725792   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:07.725817   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:07.725874   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:07.725881   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:07.725898   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:07.725945   65699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.no-preload-963041 san=[127.0.0.1 192.168.72.84 localhost minikube no-preload-963041]
	I0318 21:59:07.893632   65699 provision.go:177] copyRemoteCerts
	I0318 21:59:07.893685   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:07.893711   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.896227   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896501   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.896527   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896692   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.896859   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.897035   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.897205   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:07.983501   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:08.014432   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:59:08.043755   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:59:08.074388   65699 provision.go:87] duration metric: took 354.707214ms to configureAuth
	I0318 21:59:08.074413   65699 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:08.074571   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:08.074638   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.077314   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077658   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.077690   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077837   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.077996   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078150   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078289   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.078435   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.078582   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.078596   65699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:08.446711   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:08.446745   65699 machine.go:97] duration metric: took 1.105332987s to provisionDockerMachine
	I0318 21:59:08.446757   65699 start.go:293] postStartSetup for "no-preload-963041" (driver="kvm2")
	I0318 21:59:08.446772   65699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:08.446787   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.447090   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:08.447118   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.449551   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.449917   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.449955   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.450117   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.450308   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.450471   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.450611   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.542283   65699 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:08.547389   65699 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:08.547423   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:08.547501   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:08.547606   65699 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:08.547732   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:08.558721   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:08.586136   65699 start.go:296] duration metric: took 139.367706ms for postStartSetup
	I0318 21:59:08.586177   65699 fix.go:56] duration metric: took 19.636089577s for fixHost
	I0318 21:59:08.586201   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.588809   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589192   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.589219   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589435   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.589604   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589731   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589838   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.589972   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.590182   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.590197   65699 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:08.706260   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799148.650279332
	
	I0318 21:59:08.706283   65699 fix.go:216] guest clock: 1710799148.650279332
	I0318 21:59:08.706293   65699 fix.go:229] Guest: 2024-03-18 21:59:08.650279332 +0000 UTC Remote: 2024-03-18 21:59:08.586181408 +0000 UTC m=+272.029432082 (delta=64.097924ms)
	I0318 21:59:08.706337   65699 fix.go:200] guest clock delta is within tolerance: 64.097924ms
	I0318 21:59:08.706350   65699 start.go:83] releasing machines lock for "no-preload-963041", held for 19.756290817s
	I0318 21:59:08.706384   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.706707   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:08.709113   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709389   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.709417   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709561   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710009   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710155   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710229   65699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:08.710278   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.710330   65699 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:08.710349   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.713131   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713154   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713464   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713492   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713521   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713536   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713632   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713739   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713824   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713987   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713988   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714117   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.714177   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714337   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.827151   65699 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:08.833847   65699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:08.985638   65699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:08.992294   65699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:08.992372   65699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:09.009419   65699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:09.009444   65699 start.go:494] detecting cgroup driver to use...
	I0318 21:59:09.009509   65699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:09.031942   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:09.051842   65699 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:09.051901   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:09.068136   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:09.084445   65699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:09.234323   65699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:09.402144   65699 docker.go:233] disabling docker service ...
	I0318 21:59:09.402210   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:09.419960   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:09.434836   65699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:09.572242   65699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:09.718817   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:09.734607   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:09.756470   65699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:09.756533   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.768595   65699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:09.768685   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.780726   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.800700   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.817396   65699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:09.829896   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.842211   65699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.867273   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.880909   65699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:09.893254   65699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:09.893297   65699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:09.910897   65699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:09.922400   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:10.065248   65699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:10.223498   65699 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:10.223577   65699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:10.230686   65699 start.go:562] Will wait 60s for crictl version
	I0318 21:59:10.230752   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.235527   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:10.278655   65699 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:10.278756   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.310992   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.344925   65699 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 21:59:07.298973   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:09.799803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:10.346255   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:10.349081   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349418   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:10.349437   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349657   65699 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:10.354793   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:10.369744   65699 kubeadm.go:877] updating cluster {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:10.369893   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:59:10.369951   65699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:10.409975   65699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 21:59:10.410001   65699 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:59:10.410062   65699 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.410074   65699 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.410086   65699 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.410122   65699 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.410148   65699 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.410166   65699 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 21:59:10.410213   65699 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.410223   65699 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411690   65699 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.411695   65699 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 21:59:10.411730   65699 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.411747   65699 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.411764   65699 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.411793   65699 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.553195   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 21:59:10.553249   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.555774   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.559123   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.562266   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.571390   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.592690   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.702213   65699 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 21:59:10.702265   65699 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.702314   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857028   65699 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 21:59:10.857072   65699 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.857087   65699 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 21:59:10.857117   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857146   65699 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.857154   65699 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 21:59:10.857180   65699 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.857197   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857214   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857211   65699 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 21:59:10.857250   65699 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.857254   65699 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 21:59:10.857264   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.857275   65699 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.857282   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857305   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.872164   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.872195   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.872268   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.927043   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.927147   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.927095   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.927219   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.972625   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 21:59:10.972740   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:11.016239   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 21:59:11.016291   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.016356   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:11.016380   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.047703   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 21:59:11.047732   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047784   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047849   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.047952   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.069007   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 21:59:11.069064   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 21:59:11.069095   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 21:59:11.069126   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 21:59:11.069139   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:10.035384   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.534785   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.034607   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.535142   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.035259   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.535494   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.034673   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.535452   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.034630   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.535058   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.319858   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320279   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320310   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.320224   66608 retry.go:31] will retry after 253.332307ms: waiting for machine to come up
	I0318 21:59:10.575748   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576242   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576271   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.576194   66608 retry.go:31] will retry after 484.439329ms: waiting for machine to come up
	I0318 21:59:11.061837   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062291   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.062247   66608 retry.go:31] will retry after 520.757249ms: waiting for machine to come up
	I0318 21:59:11.585112   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585541   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585571   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.585485   66608 retry.go:31] will retry after 482.335377ms: waiting for machine to come up
	I0318 21:59:12.068813   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069420   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069456   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:12.069374   66608 retry.go:31] will retry after 936.563875ms: waiting for machine to come up
	I0318 21:59:13.007582   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.007986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.008012   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.007945   66608 retry.go:31] will retry after 864.468016ms: waiting for machine to come up
	I0318 21:59:13.874400   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874910   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874942   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.874875   66608 retry.go:31] will retry after 1.239808671s: waiting for machine to come up
	I0318 21:59:15.116440   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116834   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116855   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:15.116784   66608 retry.go:31] will retry after 1.208141339s: waiting for machine to come up
	I0318 21:59:11.804059   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:14.301199   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:16.301517   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:11.928081   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.330891   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.28291236s)
	I0318 21:59:14.330933   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330948   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.261785854s)
	I0318 21:59:14.330971   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330974   65699 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.402863992s)
	I0318 21:59:14.330979   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.283167958s)
	I0318 21:59:14.330996   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 21:59:14.331011   65699 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 21:59:14.331019   65699 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331043   65699 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.331064   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331086   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:14.336430   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:15.034609   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:15.534895   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.034956   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.535474   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.534736   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.035297   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.534669   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.035540   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.534617   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327415   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:16.327350   66608 retry.go:31] will retry after 2.24875206s: waiting for machine to come up
	I0318 21:59:18.578068   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578644   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:18.578589   66608 retry.go:31] will retry after 2.267791851s: waiting for machine to come up
	I0318 21:59:18.800406   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:20.800524   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:18.591731   65699 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.255273393s)
	I0318 21:59:18.591789   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 21:59:18.591897   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:18.591937   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.260848845s)
	I0318 21:59:18.591958   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 21:59:18.591986   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:18.592046   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:19.859577   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.267508443s)
	I0318 21:59:19.859608   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 21:59:19.859637   65699 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:19.859641   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.267714811s)
	I0318 21:59:19.859674   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 21:59:19.859685   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:20.035133   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.534922   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.035083   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.534538   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.035505   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.535008   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.035123   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.535181   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.034939   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.534985   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.847586   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848099   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848135   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:20.848048   66608 retry.go:31] will retry after 2.918466892s: waiting for machine to come up
	I0318 21:59:23.768491   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.768999   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.769030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:23.768962   66608 retry.go:31] will retry after 4.373256501s: waiting for machine to come up
	I0318 21:59:22.800765   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:24.801392   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:21.944666   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.084944906s)
	I0318 21:59:21.944700   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 21:59:21.944720   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:21.944766   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:24.714752   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.769964684s)
	I0318 21:59:24.714793   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 21:59:24.714827   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:24.714884   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:25.035324   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:25.534635   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.034965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.035448   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.534690   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.034991   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.034585   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.535220   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.146019   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146507   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Found IP for machine: 192.168.50.150
	I0318 21:59:28.146533   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserving static IP address...
	I0318 21:59:28.146549   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has current primary IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146939   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.146966   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserved static IP address: 192.168.50.150
	I0318 21:59:28.146986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | skip adding static IP to network mk-default-k8s-diff-port-660775 - found existing host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"}
	I0318 21:59:28.147006   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Getting to WaitForSSH function...
	I0318 21:59:28.147030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for SSH to be available...
	I0318 21:59:28.149408   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149771   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.149799   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149929   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH client type: external
	I0318 21:59:28.149978   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa (-rw-------)
	I0318 21:59:28.150020   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:28.150039   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | About to run SSH command:
	I0318 21:59:28.150050   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | exit 0
	I0318 21:59:28.273437   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:28.273768   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetConfigRaw
	I0318 21:59:28.274402   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.277330   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.277757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277997   65170 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/config.json ...
	I0318 21:59:28.278217   65170 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:28.278240   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:28.278435   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.280754   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281149   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.281178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281318   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.281495   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281646   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281796   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.281955   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.282163   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.282185   65170 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:28.390614   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:28.390642   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.390896   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:59:28.390923   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.391095   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.394421   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.394838   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.394876   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.395178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.395410   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395775   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.395953   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.396145   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.396160   65170 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-660775 && echo "default-k8s-diff-port-660775" | sudo tee /etc/hostname
	I0318 21:59:28.522303   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-660775
	
	I0318 21:59:28.522347   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.525224   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.525667   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525789   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.525961   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526122   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.526471   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.526651   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.526676   65170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-660775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-660775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-660775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:28.641488   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:28.641521   65170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:28.641547   65170 buildroot.go:174] setting up certificates
	I0318 21:59:28.641555   65170 provision.go:84] configureAuth start
	I0318 21:59:28.641564   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.641871   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.644934   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.645301   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645425   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.647753   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648089   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.648119   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648360   65170 provision.go:143] copyHostCerts
	I0318 21:59:28.648423   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:28.648435   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:28.648507   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:28.648620   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:28.648631   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:28.648660   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:28.648731   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:28.648740   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:28.648769   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:28.648829   65170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-660775 san=[127.0.0.1 192.168.50.150 default-k8s-diff-port-660775 localhost minikube]
	I0318 21:59:28.697191   65170 provision.go:177] copyRemoteCerts
	I0318 21:59:28.697253   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:28.697274   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.699919   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700237   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.700269   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700477   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.700694   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.700882   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.701060   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:28.793840   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:28.829285   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 21:59:28.857628   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:59:28.886344   65170 provision.go:87] duration metric: took 244.778215ms to configureAuth
	I0318 21:59:28.886366   65170 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:28.886527   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:59:28.886593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.889885   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890321   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.890351   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890534   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.890721   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.890879   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.891013   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.891190   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.891366   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.891399   65170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:29.189002   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:29.189033   65170 machine.go:97] duration metric: took 910.801375ms to provisionDockerMachine
	I0318 21:59:29.189046   65170 start.go:293] postStartSetup for "default-k8s-diff-port-660775" (driver="kvm2")
	I0318 21:59:29.189058   65170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:29.189083   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.189409   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:29.189438   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.192164   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.192512   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.192866   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.193045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.193190   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.277850   65170 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:29.282886   65170 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:29.282909   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:29.282975   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:29.283065   65170 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:29.283172   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:29.296052   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:29.323906   65170 start.go:296] duration metric: took 134.847993ms for postStartSetup
	I0318 21:59:29.323945   65170 fix.go:56] duration metric: took 20.61742941s for fixHost
	I0318 21:59:29.323969   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.326616   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.326920   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.327063   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.327300   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327472   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327622   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.327853   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:29.328058   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:29.328070   65170 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:29.430348   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799169.377980776
	
	I0318 21:59:29.430377   65170 fix.go:216] guest clock: 1710799169.377980776
	I0318 21:59:29.430386   65170 fix.go:229] Guest: 2024-03-18 21:59:29.377980776 +0000 UTC Remote: 2024-03-18 21:59:29.323950953 +0000 UTC m=+359.071824665 (delta=54.029823ms)
	I0318 21:59:29.430411   65170 fix.go:200] guest clock delta is within tolerance: 54.029823ms
	I0318 21:59:29.430420   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 20.723939352s
	I0318 21:59:29.430450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.430727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:29.433339   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433686   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.433713   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433865   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434308   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434531   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434632   65170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:29.434682   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.434783   65170 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:29.434811   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.437380   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437479   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437731   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437760   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437829   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437880   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.438033   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438170   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438244   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438332   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438393   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438603   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.438694   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.540670   65170 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:29.547318   65170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:29.704221   65170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:29.710762   65170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:29.710832   65170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:29.727820   65170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:29.727838   65170 start.go:494] detecting cgroup driver to use...
	I0318 21:59:29.727905   65170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:29.745750   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:29.760984   65170 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:29.761024   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:29.776639   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:29.791749   65170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:29.914380   65170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:30.096200   65170 docker.go:233] disabling docker service ...
	I0318 21:59:30.096281   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:30.112512   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:30.126090   65170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:30.258617   65170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:30.397700   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:30.420478   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:30.443197   65170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:30.443282   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.455577   65170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:30.455630   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.467898   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.480041   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.492501   65170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:30.505178   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.517657   65170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.537376   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.554749   65170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:30.570281   65170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:30.570352   65170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:30.587991   65170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:30.600354   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:30.744678   65170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:30.902192   65170 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:30.902279   65170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:30.907869   65170 start.go:562] Will wait 60s for crictl version
	I0318 21:59:30.907937   65170 ssh_runner.go:195] Run: which crictl
	I0318 21:59:30.913588   65170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:30.957344   65170 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:30.957431   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:30.991141   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:31.024452   65170 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
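Note: the sed edits above all target the same CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). A minimal sketch for spot-checking the result on the guest; the expected values in the comments are inferred from the sed commands in the log, not from captured output:

    # Inspect the CRI-O drop-in that minikube just rewrote (path taken from the log above).
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # Expected (inferred, not captured):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"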
	I0318 21:59:27.301221   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:29.799576   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:26.781379   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.066468133s)
	I0318 21:59:26.781415   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 21:59:26.781445   65699 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:26.781493   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:27.747707   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 21:59:27.747764   65699 cache_images.go:123] Successfully loaded all cached images
	I0318 21:59:27.747769   65699 cache_images.go:92] duration metric: took 17.337757279s to LoadCachedImages
	I0318 21:59:27.747781   65699 kubeadm.go:928] updating node { 192.168.72.84 8443 v1.29.0-rc.2 crio true true} ...
	I0318 21:59:27.747907   65699 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-963041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
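Note: the empty ExecStart= line in the generated kubelet unit above is the standard systemd override idiom: an empty assignment clears the ExecStart inherited from kubelet.service so that the following ExecStart= fully replaces it rather than adding a second command. A short sketch of applying such a drop-in by hand (the path matches the one scp'd later in this log; the flags shown are illustrative):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf contains:
    #   [Service]
    #   ExecStart=                                   <- clears the inherited ExecStart
    #   ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --config=/var/lib/kubelet/config.yaml ...
    sudo systemctl daemon-reload    # required after editing drop-ins
    sudo systemctl restart kubelet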
	I0318 21:59:27.747986   65699 ssh_runner.go:195] Run: crio config
	I0318 21:59:27.810020   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:27.810048   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:27.810060   65699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:27.810078   65699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.84 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963041 NodeName:no-preload-963041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:27.810242   65699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963041"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:27.810327   65699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 21:59:27.823120   65699 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:27.823172   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:27.834742   65699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 21:59:27.854365   65699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 21:59:27.872873   65699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 21:59:27.891245   65699 ssh_runner.go:195] Run: grep 192.168.72.84	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:27.895305   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
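Note: the one-liner above is minikube's upsert idiom for /etc/hosts: drop any existing line for the name, append the fresh IP-to-name mapping, write to a temp file, then copy it back with sudo (a plain redirect would fail because the unprivileged shell, not sudo, would open /etc/hosts). A hedged sketch of the same pattern, using the values from this log as placeholders:

    IP=192.168.72.84
    NAME=control-plane.minikube.internal
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$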
	I0318 21:59:27.907928   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:28.044997   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:28.064471   65699 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041 for IP: 192.168.72.84
	I0318 21:59:28.064489   65699 certs.go:194] generating shared ca certs ...
	I0318 21:59:28.064503   65699 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:28.064668   65699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:28.064733   65699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:28.064747   65699 certs.go:256] generating profile certs ...
	I0318 21:59:28.064847   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/client.key
	I0318 21:59:28.064927   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key.53f57e82
	I0318 21:59:28.064975   65699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key
	I0318 21:59:28.065090   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:28.065140   65699 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:28.065154   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:28.065190   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:28.065218   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:28.065244   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:28.065292   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:28.066189   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:28.108239   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:28.147385   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:28.191255   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:28.231079   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 21:59:28.269730   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:59:28.302326   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:28.331762   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:28.359487   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:28.390196   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:28.422323   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:28.452212   65699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:28.476910   65699 ssh_runner.go:195] Run: openssl version
	I0318 21:59:28.483480   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:28.495230   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500728   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500771   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.507487   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:28.520368   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:28.533700   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540767   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540817   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.549380   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:28.566307   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:28.582377   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589139   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589192   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.597396   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
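Note: the `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above implement OpenSSL's hashed-directory CA lookup: TLS libraries locate a trusted CA by hashing its subject name and opening `<hash>.0` in the certificates directory. A small sketch of the same step done by hand for the minikube CA (paths reused from the log; b5213941.0 above is exactly such a subject-hash link):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"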
	I0318 21:59:28.610189   65699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:28.616488   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:28.625547   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:28.634680   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:28.643077   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:28.652470   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:28.660641   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
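Note: the run of `openssl x509 ... -checkend 86400` commands above is the expiry guard for the restarted control plane: `-checkend 86400` exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, in which case minikube would regenerate it. The same check by hand (path reused from the log):

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least another 24h"
    else
      echo "certificate expires within 24h (or is already expired)"
    fi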
	I0318 21:59:28.669216   65699 kubeadm.go:391] StartCluster: {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:28.669342   65699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:28.669444   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.719357   65699 cri.go:89] found id: ""
	I0318 21:59:28.719427   65699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:28.733158   65699 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:28.733179   65699 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:28.733186   65699 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:28.733234   65699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:28.744804   65699 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:28.745805   65699 kubeconfig.go:125] found "no-preload-963041" server: "https://192.168.72.84:8443"
	I0318 21:59:28.747888   65699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:28.757871   65699 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.84
	I0318 21:59:28.757896   65699 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:28.757918   65699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:28.757964   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.805988   65699 cri.go:89] found id: ""
	I0318 21:59:28.806057   65699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:28.829257   65699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:28.841515   65699 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:28.841543   65699 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:28.841594   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:59:28.853433   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:28.853499   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:28.864593   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:59:28.875236   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:28.875285   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:28.887756   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.898219   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:28.898271   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.909308   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:59:28.919480   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:28.919540   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:28.930305   65699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:28.941125   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:29.056129   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.261585   65699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.205423679s)
	I0318 21:59:30.261614   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.498583   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.589160   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.713046   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:30.713150   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.214160   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.034539   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.535237   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.034842   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.534620   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.034614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.534583   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.035348   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.534614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.034683   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.534528   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.025614   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:31.028381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028758   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:31.028783   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028960   65170 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:31.033836   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:31.048652   65170 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:31.048798   65170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:59:31.048853   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:31.089246   65170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:59:31.089322   65170 ssh_runner.go:195] Run: which lz4
	I0318 21:59:31.094026   65170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:59:31.098900   65170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:59:31.098929   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:59:33.166556   65170 crio.go:462] duration metric: took 2.072562246s to copy over tarball
	I0318 21:59:33.166639   65170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:59:31.810567   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:34.301018   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:36.346463   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:31.714009   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.762157   65699 api_server.go:72] duration metric: took 1.049110677s to wait for apiserver process to appear ...
	I0318 21:59:31.762188   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:31.762210   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:31.762737   65699 api_server.go:269] stopped: https://192.168.72.84:8443/healthz: Get "https://192.168.72.84:8443/healthz": dial tcp 192.168.72.84:8443: connect: connection refused
	I0318 21:59:32.263205   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.738750   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.738785   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.738802   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.804061   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.804102   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.804116   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.842097   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.842144   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:35.262351   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.267395   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.267439   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:35.763016   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.775072   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.775109   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.262338   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:36.267165   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:36.267207   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.762879   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.074225   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:37.074263   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:37.262637   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.267514   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 21:59:37.275551   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 21:59:37.275579   65699 api_server.go:131] duration metric: took 5.513383348s to wait for apiserver health ...
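Note: the connection-refused -> 403 -> 500 -> 200 progression above is the normal startup sequence for a restarted apiserver: anonymous probes are rejected until the RBAC bootstrap roles exist, /healthz then reports 500 while the remaining post-start hooks finish, and finally returns ok. A hedged curl loop that reproduces the same probe from the host (IP and port taken from this log; -k skips TLS verification, matching the anonymous probe):

    until curl -ks https://192.168.72.84:8443/healthz | grep -qx 'ok'; do
      echo "apiserver not healthy yet, retrying..."
      sleep 1
    done
    echo "apiserver reports ok"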
	I0318 21:59:37.275590   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:37.275598   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:37.496330   65699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:37.641915   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:37.659277   65699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
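Note: the 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration selected a few lines earlier; its exact contents are not shown in the log. A bridge + host-local IPAM conflist for the 10.244.0.0/16 pod CIDR typically looks roughly like the sketch below (illustrative only, not the file minikube wrote):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist.example >/dev/null
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF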
	I0318 21:59:37.684019   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:38.075296   65699 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:38.075333   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:38.075353   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:38.075367   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:38.075375   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:38.075388   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:38.075407   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:38.075418   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:38.075429   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:38.075440   65699 system_pods.go:74] duration metric: took 391.399859ms to wait for pod list to return data ...
	I0318 21:59:38.075452   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:38.252627   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:38.252659   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:38.252670   65699 node_conditions.go:105] duration metric: took 177.209294ms to run NodePressure ...
	I0318 21:59:38.252692   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.662257   65699 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670807   65699 kubeadm.go:733] kubelet initialised
	I0318 21:59:38.670836   65699 kubeadm.go:734] duration metric: took 8.550399ms waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670846   65699 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:38.680740   65699 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.689134   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689157   65699 pod_ready.go:81] duration metric: took 8.393104ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.689169   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689178   65699 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.693796   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693815   65699 pod_ready.go:81] duration metric: took 4.628403ms for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.693824   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693829   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.701225   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701245   65699 pod_ready.go:81] duration metric: took 7.410052ms for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.701254   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701262   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.707848   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707871   65699 pod_ready.go:81] duration metric: took 6.598987ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.707882   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707889   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.066641   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066668   65699 pod_ready.go:81] duration metric: took 358.769058ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.066679   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066687   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.466406   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466440   65699 pod_ready.go:81] duration metric: took 399.746217ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.466449   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466455   65699 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.866206   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866232   65699 pod_ready.go:81] duration metric: took 399.76891ms for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.866240   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866247   65699 pod_ready.go:38] duration metric: took 1.195391629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:39.866263   65699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 21:59:39.879772   65699 ops.go:34] apiserver oom_adj: -16
	I0318 21:59:39.879796   65699 kubeadm.go:591] duration metric: took 11.146603139s to restartPrimaryControlPlane
	I0318 21:59:39.879807   65699 kubeadm.go:393] duration metric: took 11.21059758s to StartCluster
	I0318 21:59:39.879825   65699 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.879915   65699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:59:39.881739   65699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.881970   65699 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:59:39.883934   65699 out.go:177] * Verifying Kubernetes components...
	I0318 21:59:39.882064   65699 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 21:59:39.882254   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:39.885913   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:39.885924   65699 addons.go:69] Setting metrics-server=true in profile "no-preload-963041"
	I0318 21:59:39.885932   65699 addons.go:69] Setting default-storageclass=true in profile "no-preload-963041"
	I0318 21:59:39.885950   65699 addons.go:234] Setting addon metrics-server=true in "no-preload-963041"
	W0318 21:59:39.885958   65699 addons.go:243] addon metrics-server should already be in state true
	I0318 21:59:39.885966   65699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963041"
	I0318 21:59:39.885918   65699 addons.go:69] Setting storage-provisioner=true in profile "no-preload-963041"
	I0318 21:59:39.885985   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886000   65699 addons.go:234] Setting addon storage-provisioner=true in "no-preload-963041"
	W0318 21:59:39.886052   65699 addons.go:243] addon storage-provisioner should already be in state true
	I0318 21:59:39.886075   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886384   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886403   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886437   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886392   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886448   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886438   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.902103   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0318 21:59:39.902574   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.903192   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.903211   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.903568   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.904113   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.904142   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.908122   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0318 21:59:39.908269   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0318 21:59:39.908566   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.908639   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.909237   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.909251   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.909662   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.909834   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.913534   65699 addons.go:234] Setting addon default-storageclass=true in "no-preload-963041"
	W0318 21:59:39.913558   65699 addons.go:243] addon default-storageclass should already be in state true
	I0318 21:59:39.913586   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.913959   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.913992   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.921260   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.921284   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.921661   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.922725   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.922778   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.925575   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0318 21:59:39.926170   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.926799   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.926819   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.933014   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0318 21:59:39.933066   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.934464   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.934527   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.935441   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.935456   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.936236   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.936821   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.936870   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.936983   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.938986   65699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:39.940103   65699 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:39.940115   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 21:59:39.940128   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.942712   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943138   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.943168   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943415   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.943574   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.943690   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.943828   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.944813   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0318 21:59:39.961605   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.962117   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.962140   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.962564   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.962745   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.964606   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.970697   65699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 21:59:35.034845   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:35.535418   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.534613   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.034944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.535119   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.035549   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.534668   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.034813   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.534586   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.222479   65170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.055805805s)
	I0318 21:59:36.222507   65170 crio.go:469] duration metric: took 3.055923767s to extract the tarball
	I0318 21:59:36.222515   65170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:59:36.265990   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:36.314679   65170 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:59:36.314704   65170 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:59:36.314714   65170 kubeadm.go:928] updating node { 192.168.50.150 8444 v1.28.4 crio true true} ...
	I0318 21:59:36.314828   65170 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-660775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:36.314900   65170 ssh_runner.go:195] Run: crio config
	I0318 21:59:36.375889   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:36.375908   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:36.375916   65170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:36.375935   65170 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.150 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-660775 NodeName:default-k8s-diff-port-660775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:36.376058   65170 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.150
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-660775"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:36.376117   65170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:59:36.387851   65170 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:36.387905   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:36.398095   65170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0318 21:59:36.416507   65170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:59:36.437165   65170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0318 21:59:36.458125   65170 ssh_runner.go:195] Run: grep 192.168.50.150	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:36.462688   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:36.476913   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:36.629523   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:36.648679   65170 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775 for IP: 192.168.50.150
	I0318 21:59:36.648697   65170 certs.go:194] generating shared ca certs ...
	I0318 21:59:36.648717   65170 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:36.648870   65170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:36.648942   65170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:36.648956   65170 certs.go:256] generating profile certs ...
	I0318 21:59:36.649061   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/client.key
	I0318 21:59:36.649136   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key.6eb93750
	I0318 21:59:36.649181   65170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key
	I0318 21:59:36.649342   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:36.649408   65170 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:36.649427   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:36.649465   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:36.649502   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:36.649524   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:36.649563   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:36.650116   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:36.709130   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:36.777530   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:36.822349   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:36.861155   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 21:59:36.899264   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:59:36.930697   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:36.960715   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:36.992062   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:37.020001   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:37.051443   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:37.080115   65170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:37.102221   65170 ssh_runner.go:195] Run: openssl version
	I0318 21:59:37.111020   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:37.127447   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132675   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132730   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.139092   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:37.151349   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:37.166470   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172601   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172656   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.179404   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:37.192628   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:37.206758   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211839   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211882   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.218285   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:59:37.230291   65170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:37.235312   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:37.242399   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:37.249658   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:37.256458   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:37.263110   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:37.270329   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:59:37.277040   65170 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:37.277140   65170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:37.277176   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.320525   65170 cri.go:89] found id: ""
	I0318 21:59:37.320595   65170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:37.332584   65170 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:37.332602   65170 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:37.332608   65170 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:37.332678   65170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:37.348017   65170 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:37.349557   65170 kubeconfig.go:125] found "default-k8s-diff-port-660775" server: "https://192.168.50.150:8444"
	I0318 21:59:37.352826   65170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:37.367223   65170 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.150
	I0318 21:59:37.367256   65170 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:37.367267   65170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:37.367315   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.411319   65170 cri.go:89] found id: ""
	I0318 21:59:37.411401   65170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:37.431545   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:37.442587   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:37.442610   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:37.442661   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 21:59:37.452384   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:37.452439   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:37.462519   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 21:59:37.472669   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:37.472728   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:37.483107   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.493177   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:37.493224   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.503546   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 21:59:37.513471   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:37.513512   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:37.524147   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:37.534940   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:37.665308   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.882330   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216992532s)
	I0318 21:59:38.882356   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.110948   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.217267   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.332300   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:39.332389   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.833190   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.972027   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 21:59:39.972078   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 21:59:39.972109   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.975122   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975608   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.975627   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975994   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.976196   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.976371   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.976663   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.982859   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I0318 21:59:39.983263   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.983860   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.983904   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.984308   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.984558   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.986338   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.986645   65699 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:39.986690   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 21:59:39.986718   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.989398   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989741   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.989999   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989951   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.990229   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.990392   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.990517   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:40.115233   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:40.136271   65699 node_ready.go:35] waiting up to 6m0s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:40.232668   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:40.234394   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 21:59:40.234417   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 21:59:40.256237   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:40.301845   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 21:59:40.301873   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 21:59:40.354405   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:40.354435   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 21:59:40.377996   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:41.389416   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.156705132s)
	I0318 21:59:41.389429   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.133120616s)
	I0318 21:59:41.389470   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389475   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389482   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389486   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389763   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389783   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389792   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389799   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389828   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.389874   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389890   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389899   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389938   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.390199   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390398   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.390339   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390375   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390451   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390470   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.397714   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.397736   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.397951   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.397999   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.398017   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.415620   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037584799s)
	I0318 21:59:41.415673   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.415684   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.415964   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.415992   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.416007   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416016   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.416027   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.416207   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.416220   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416229   65699 addons.go:470] Verifying addon metrics-server=true in "no-preload-963041"
	I0318 21:59:41.418761   65699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 21:59:38.798943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:40.800913   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:41.420038   65699 addons.go:505] duration metric: took 1.537986468s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 21:59:40.332810   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.411342   65170 api_server.go:72] duration metric: took 1.079036948s to wait for apiserver process to appear ...
	I0318 21:59:40.411371   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:40.411394   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:40.411932   65170 api_server.go:269] stopped: https://192.168.50.150:8444/healthz: Get "https://192.168.50.150:8444/healthz": dial tcp 192.168.50.150:8444: connect: connection refused
	I0318 21:59:40.911545   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.377410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.377443   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.377471   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.426410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.426468   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.426485   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.448464   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.448523   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:43.912498   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.918271   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.918309   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.411824   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.422200   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:44.422223   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.911509   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.916884   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 21:59:44.928835   65170 api_server.go:141] control plane version: v1.28.4
	I0318 21:59:44.928862   65170 api_server.go:131] duration metric: took 4.517483413s to wait for apiserver health ...
	I0318 21:59:44.928872   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:44.928881   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:44.930794   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:40.035532   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.535482   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.035196   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.534632   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.035183   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.535562   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.034598   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.534971   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.535025   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.932164   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:44.959217   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:45.002449   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:45.017348   65170 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:45.017394   65170 system_pods.go:61] "coredns-5dd5756b68-cjq2v" [9ae899ef-63e4-407d-9013-71552ec87614] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:45.017407   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [286b98ba-bc9e-4e2f-984c-d7b2447aef15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:45.017417   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [7a0db461-f8d5-4331-993e-d7b9345159e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:45.017428   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [e4f5859a-dfcc-41d8-9a17-acb601449821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:45.017443   65170 system_pods.go:61] "kube-proxy-qt2m6" [c3c7c6db-4935-4079-b0e7-60ba2cd886b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:45.017450   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [7115eef0-5ff4-4dfe-9135-88ad8f698e43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:45.017461   65170 system_pods.go:61] "metrics-server-57f55c9bc5-5dtf5" [b19191ee-e2db-4392-82e2-1a95fae76101] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:45.017489   65170 system_pods.go:61] "storage-provisioner" [045d4b30-47a3-4c80-a9e8-c36ef7395e6c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:45.017498   65170 system_pods.go:74] duration metric: took 15.027239ms to wait for pod list to return data ...
	I0318 21:59:45.017511   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:45.020962   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:45.020982   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:45.020991   65170 node_conditions.go:105] duration metric: took 3.47292ms to run NodePressure ...
	I0318 21:59:45.021007   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:45.277662   65170 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282939   65170 kubeadm.go:733] kubelet initialised
	I0318 21:59:45.282958   65170 kubeadm.go:734] duration metric: took 5.277143ms waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282965   65170 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.289546   65170 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:43.299509   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:45.300875   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:42.142145   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:44.641863   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:45.640660   65699 node_ready.go:49] node "no-preload-963041" has status "Ready":"True"
	I0318 21:59:45.640686   65699 node_ready.go:38] duration metric: took 5.50437071s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:45.640697   65699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.647087   65699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652062   65699 pod_ready.go:92] pod "coredns-76f75df574-6mtzp" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.652081   65699 pod_ready.go:81] duration metric: took 4.969873ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652091   65699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.035239   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.535303   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.034742   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.534584   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.034935   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.534952   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.534497   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.035380   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.535498   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.296790   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298834   65170 pod_ready.go:81] duration metric: took 9.259848ms for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.298849   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298868   65170 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.307325   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307367   65170 pod_ready.go:81] duration metric: took 8.486967ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.307380   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307389   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.319473   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319498   65170 pod_ready.go:81] duration metric: took 12.100242ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.319514   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319522   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.407356   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407379   65170 pod_ready.go:81] duration metric: took 87.846686ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.407390   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407395   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806835   65170 pod_ready.go:92] pod "kube-proxy-qt2m6" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.806866   65170 pod_ready.go:81] duration metric: took 399.462221ms for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806878   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:47.814286   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:47.799616   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:50.300118   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:46.659819   65699 pod_ready.go:92] pod "etcd-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:46.659855   65699 pod_ready.go:81] duration metric: took 1.007755238s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:46.659868   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:48.669033   65699 pod_ready.go:102] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:51.168202   65699 pod_ready.go:92] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.168229   65699 pod_ready.go:81] duration metric: took 4.508354098s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.168240   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174243   65699 pod_ready.go:92] pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.174268   65699 pod_ready.go:81] duration metric: took 6.018685ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174280   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179279   65699 pod_ready.go:92] pod "kube-proxy-kkrzx" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.179300   65699 pod_ready.go:81] duration metric: took 5.012711ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179311   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185651   65699 pod_ready.go:92] pod "kube-scheduler-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.185670   65699 pod_ready.go:81] duration metric: took 6.351567ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185678   65699 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:50.034691   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.534680   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.535213   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.034594   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.535195   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.034574   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.535423   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.035369   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.534621   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.315135   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.814432   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.798645   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:54.800561   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:53.191834   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.192346   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.035308   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:55.535503   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.035231   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.534937   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.035317   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.534581   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.034565   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.534830   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.535280   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:59:59.535354   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:59:59.577600   65622 cri.go:89] found id: ""
	I0318 21:59:59.577632   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.577643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:59:59.577651   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:59:59.577710   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:59:59.614134   65622 cri.go:89] found id: ""
	I0318 21:59:59.614158   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.614166   65622 logs.go:278] No container was found matching "etcd"
	I0318 21:59:59.614171   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:59:59.614245   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:59:59.653525   65622 cri.go:89] found id: ""
	I0318 21:59:59.653559   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.653571   65622 logs.go:278] No container was found matching "coredns"
	I0318 21:59:59.653578   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:59:59.653633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:59:59.699104   65622 cri.go:89] found id: ""
	I0318 21:59:59.699128   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.699139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:59:59.699146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:59:59.699214   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:59:59.735750   65622 cri.go:89] found id: ""
	I0318 21:59:59.735779   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.735789   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:59:59.735796   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:59:59.735876   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:59:59.775105   65622 cri.go:89] found id: ""
	I0318 21:59:59.775134   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.775142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:59:59.775149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:59:59.775193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:59:59.814154   65622 cri.go:89] found id: ""
	I0318 21:59:59.814181   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.814190   65622 logs.go:278] No container was found matching "kindnet"
	I0318 21:59:59.814197   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 21:59:59.814254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 21:59:59.852518   65622 cri.go:89] found id: ""
	I0318 21:59:59.852545   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.852556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 21:59:59.852565   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 21:59:59.852578   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:59:59.907243   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 21:59:59.907285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:59:59.922512   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:59:59.922540   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 21:59:55.313448   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.813863   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:56.813885   65170 pod_ready.go:81] duration metric: took 11.006997984s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:56.813893   65170 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:58.820535   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.802709   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:59.299235   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:01.299761   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:57.694309   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:00.192594   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	W0318 22:00:00.059182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:00.059202   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:00.059216   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:00.125654   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:00.125686   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:02.675440   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:02.689549   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:02.689628   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:02.731742   65622 cri.go:89] found id: ""
	I0318 22:00:02.731764   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.731771   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:02.731776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:02.731823   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:02.809611   65622 cri.go:89] found id: ""
	I0318 22:00:02.809643   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.809651   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:02.809656   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:02.809699   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:02.853939   65622 cri.go:89] found id: ""
	I0318 22:00:02.853972   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.853982   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:02.853990   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:02.854050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:02.892668   65622 cri.go:89] found id: ""
	I0318 22:00:02.892699   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.892709   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:02.892715   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:02.892773   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:02.934267   65622 cri.go:89] found id: ""
	I0318 22:00:02.934296   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.934307   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:02.934313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:02.934370   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:02.972533   65622 cri.go:89] found id: ""
	I0318 22:00:02.972556   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.972564   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:02.972569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:02.972614   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:03.011102   65622 cri.go:89] found id: ""
	I0318 22:00:03.011128   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.011137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:03.011142   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:03.011188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:03.060636   65622 cri.go:89] found id: ""
	I0318 22:00:03.060664   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.060673   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:03.060696   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:03.060710   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:03.145042   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:03.145070   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:03.145087   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:03.218475   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:03.218504   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:03.262154   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:03.262185   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:03.316766   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:03.316803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:00.821070   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.821300   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:03.301922   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.799844   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.693235   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:04.693324   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.833936   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:05.850780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:05.850858   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:05.894909   65622 cri.go:89] found id: ""
	I0318 22:00:05.894931   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.894938   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:05.894944   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:05.894987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:05.935989   65622 cri.go:89] found id: ""
	I0318 22:00:05.936020   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.936028   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:05.936032   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:05.936081   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:05.976774   65622 cri.go:89] found id: ""
	I0318 22:00:05.976797   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.976805   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:05.976811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:05.976869   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:06.015350   65622 cri.go:89] found id: ""
	I0318 22:00:06.015376   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.015387   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:06.015394   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:06.015453   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:06.059389   65622 cri.go:89] found id: ""
	I0318 22:00:06.059416   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.059427   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:06.059434   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:06.059513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:06.099524   65622 cri.go:89] found id: ""
	I0318 22:00:06.099544   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.099553   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:06.099558   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:06.099601   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:06.140343   65622 cri.go:89] found id: ""
	I0318 22:00:06.140374   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.140386   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:06.140393   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:06.140448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:06.179217   65622 cri.go:89] found id: ""
	I0318 22:00:06.179247   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.179257   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:06.179268   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:06.179286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:06.231348   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:06.231379   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:06.246049   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:06.246084   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:06.326182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:06.326203   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:06.326215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:06.405862   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:06.405895   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:08.955965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:08.970007   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:08.970076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:09.008724   65622 cri.go:89] found id: ""
	I0318 22:00:09.008752   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.008764   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:09.008781   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:09.008856   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:09.050121   65622 cri.go:89] found id: ""
	I0318 22:00:09.050158   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.050165   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:09.050170   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:09.050227   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:09.090263   65622 cri.go:89] found id: ""
	I0318 22:00:09.090293   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.090304   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:09.090312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:09.090375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:09.127645   65622 cri.go:89] found id: ""
	I0318 22:00:09.127679   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.127690   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:09.127697   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:09.127755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:09.169171   65622 cri.go:89] found id: ""
	I0318 22:00:09.169199   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.169211   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:09.169218   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:09.169278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:09.209923   65622 cri.go:89] found id: ""
	I0318 22:00:09.209949   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.209956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:09.209963   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:09.210013   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:09.247990   65622 cri.go:89] found id: ""
	I0318 22:00:09.248029   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.248039   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:09.248050   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:09.248109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:09.287287   65622 cri.go:89] found id: ""
	I0318 22:00:09.287326   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.287337   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:09.287347   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:09.287369   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:09.342877   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:09.342902   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:09.359137   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:09.359159   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:09.454504   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:09.454528   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:09.454543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:09.549191   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:09.549223   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:05.322655   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.820557   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.821227   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.799881   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.802803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:06.694723   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.096415   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:12.112886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:12.112969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:12.155639   65622 cri.go:89] found id: ""
	I0318 22:00:12.155662   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.155670   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:12.155676   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:12.155729   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:12.199252   65622 cri.go:89] found id: ""
	I0318 22:00:12.199283   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.199293   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:12.199301   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:12.199385   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:12.239688   65622 cri.go:89] found id: ""
	I0318 22:00:12.239719   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.239728   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:12.239734   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:12.239788   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:12.278610   65622 cri.go:89] found id: ""
	I0318 22:00:12.278640   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.278651   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:12.278659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:12.278724   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:12.318834   65622 cri.go:89] found id: ""
	I0318 22:00:12.318864   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.318873   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:12.318881   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:12.318939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:12.358964   65622 cri.go:89] found id: ""
	I0318 22:00:12.358986   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.358994   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:12.359002   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:12.359050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:12.399041   65622 cri.go:89] found id: ""
	I0318 22:00:12.399070   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.399080   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:12.399087   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:12.399151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:12.445019   65622 cri.go:89] found id: ""
	I0318 22:00:12.445043   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.445053   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:12.445064   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:12.445079   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:12.504987   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:12.505023   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:12.521381   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:12.521408   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:12.601574   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:12.601599   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:12.601615   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:12.683772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:12.683801   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:11.821593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:13.821792   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.299680   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.300073   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:11.693179   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.194532   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:15.229005   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:15.248227   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:15.248296   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:15.307918   65622 cri.go:89] found id: ""
	I0318 22:00:15.307940   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.307947   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:15.307953   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:15.307997   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:15.367388   65622 cri.go:89] found id: ""
	I0318 22:00:15.367417   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.367436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:15.367453   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:15.367513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:15.410880   65622 cri.go:89] found id: ""
	I0318 22:00:15.410910   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.410919   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:15.410926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:15.410983   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:15.450980   65622 cri.go:89] found id: ""
	I0318 22:00:15.451004   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.451011   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:15.451018   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:15.451071   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:15.491196   65622 cri.go:89] found id: ""
	I0318 22:00:15.491222   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.491233   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:15.491239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:15.491284   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:15.537135   65622 cri.go:89] found id: ""
	I0318 22:00:15.537159   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.537166   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:15.537173   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:15.537226   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:15.580730   65622 cri.go:89] found id: ""
	I0318 22:00:15.580762   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.580772   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:15.580780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:15.580852   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:15.626221   65622 cri.go:89] found id: ""
	I0318 22:00:15.626252   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.626265   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:15.626276   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:15.626292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.670571   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:15.670600   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:15.725485   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:15.725519   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:15.742790   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:15.742820   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:15.824867   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:15.824889   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:15.824924   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.407070   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:18.421757   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:18.421824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:18.461024   65622 cri.go:89] found id: ""
	I0318 22:00:18.461044   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.461052   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:18.461058   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:18.461104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:18.499002   65622 cri.go:89] found id: ""
	I0318 22:00:18.499032   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.499040   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:18.499046   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:18.499091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:18.539207   65622 cri.go:89] found id: ""
	I0318 22:00:18.539237   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.539248   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:18.539255   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:18.539315   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:18.579691   65622 cri.go:89] found id: ""
	I0318 22:00:18.579717   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.579726   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:18.579733   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:18.579814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:18.625084   65622 cri.go:89] found id: ""
	I0318 22:00:18.625111   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.625120   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:18.625126   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:18.625178   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:18.669012   65622 cri.go:89] found id: ""
	I0318 22:00:18.669038   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.669047   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:18.669053   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:18.669101   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:18.707523   65622 cri.go:89] found id: ""
	I0318 22:00:18.707544   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.707551   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:18.707557   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:18.707611   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:18.755138   65622 cri.go:89] found id: ""
	I0318 22:00:18.755162   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.755173   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:18.755184   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:18.755199   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:18.809140   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:18.809163   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:18.827102   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:18.827125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:18.904168   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:18.904194   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:18.904209   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.982438   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:18.982471   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.822593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.321691   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.798687   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.802403   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.302525   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.692709   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.692875   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:20.693620   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.532643   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:21.547477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:21.547545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:21.585013   65622 cri.go:89] found id: ""
	I0318 22:00:21.585038   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.585049   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:21.585056   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:21.585114   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:21.628115   65622 cri.go:89] found id: ""
	I0318 22:00:21.628139   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.628147   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:21.628153   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:21.628207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:21.664896   65622 cri.go:89] found id: ""
	I0318 22:00:21.664931   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.664942   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:21.664948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:21.665010   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:21.705770   65622 cri.go:89] found id: ""
	I0318 22:00:21.705794   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.705803   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:21.705811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:21.705868   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:21.751268   65622 cri.go:89] found id: ""
	I0318 22:00:21.751296   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.751305   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:21.751313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:21.751376   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:21.798688   65622 cri.go:89] found id: ""
	I0318 22:00:21.798714   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.798724   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:21.798732   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:21.798800   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:21.839253   65622 cri.go:89] found id: ""
	I0318 22:00:21.839281   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.839290   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:21.839297   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:21.839365   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:21.884026   65622 cri.go:89] found id: ""
	I0318 22:00:21.884055   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.884068   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:21.884086   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:21.884105   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:21.940412   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:21.940446   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:21.956634   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:21.956660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:22.031458   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:22.031481   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:22.031497   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:22.115902   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:22.115932   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:24.665945   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:24.680474   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:24.680545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:24.719692   65622 cri.go:89] found id: ""
	I0318 22:00:24.719711   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.719718   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:24.719723   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:24.719768   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:24.760734   65622 cri.go:89] found id: ""
	I0318 22:00:24.760758   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.760767   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:24.760775   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:24.760830   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:24.802688   65622 cri.go:89] found id: ""
	I0318 22:00:24.802710   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.802717   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:24.802723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:24.802778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:24.842693   65622 cri.go:89] found id: ""
	I0318 22:00:24.842715   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.842723   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:24.842730   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:24.842796   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:24.887149   65622 cri.go:89] found id: ""
	I0318 22:00:24.887173   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.887185   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:24.887195   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:24.887278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:24.926465   65622 cri.go:89] found id: ""
	I0318 22:00:24.926511   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.926522   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:24.926530   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:24.926584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:24.966876   65622 cri.go:89] found id: ""
	I0318 22:00:24.966897   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.966904   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:24.966910   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:24.966957   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:20.820297   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:22.821250   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:24.825337   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.800104   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:26.299105   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.193665   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.194188   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.007251   65622 cri.go:89] found id: ""
	I0318 22:00:25.007277   65622 logs.go:276] 0 containers: []
	W0318 22:00:25.007288   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:25.007298   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:25.007311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:25.092214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:25.092235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:25.092247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:25.173041   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:25.173076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:25.221169   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:25.221194   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:25.276322   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:25.276352   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:27.792368   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:27.809294   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:27.809359   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:27.848976   65622 cri.go:89] found id: ""
	I0318 22:00:27.849005   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.849015   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:27.849023   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:27.849076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:27.890416   65622 cri.go:89] found id: ""
	I0318 22:00:27.890437   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.890445   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:27.890450   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:27.890505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:27.934782   65622 cri.go:89] found id: ""
	I0318 22:00:27.934807   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.934819   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:27.934827   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:27.934911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:27.972251   65622 cri.go:89] found id: ""
	I0318 22:00:27.972275   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.972283   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:27.972288   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:27.972366   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:28.011321   65622 cri.go:89] found id: ""
	I0318 22:00:28.011345   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.011357   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:28.011363   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:28.011421   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:28.048087   65622 cri.go:89] found id: ""
	I0318 22:00:28.048109   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.048116   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:28.048122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:28.048169   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:28.088840   65622 cri.go:89] found id: ""
	I0318 22:00:28.088868   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.088878   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:28.088886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:28.088961   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:28.128687   65622 cri.go:89] found id: ""
	I0318 22:00:28.128714   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.128723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:28.128733   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:28.128745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:28.170853   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:28.170882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:28.224825   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:28.224850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:28.239744   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:28.239773   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:28.318640   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:28.318664   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:28.318680   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:27.321417   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:29.326924   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:28.798399   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.800456   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:27.692517   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.194633   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.897430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:30.914894   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:30.914950   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:30.952709   65622 cri.go:89] found id: ""
	I0318 22:00:30.952737   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.952748   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:30.952756   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:30.952814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:30.991113   65622 cri.go:89] found id: ""
	I0318 22:00:30.991142   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.991151   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:30.991159   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:30.991218   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:31.030248   65622 cri.go:89] found id: ""
	I0318 22:00:31.030273   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.030283   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:31.030291   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:31.030356   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:31.070836   65622 cri.go:89] found id: ""
	I0318 22:00:31.070860   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.070868   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:31.070874   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:31.070941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:31.109134   65622 cri.go:89] found id: ""
	I0318 22:00:31.109154   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.109162   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:31.109167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:31.109222   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:31.149757   65622 cri.go:89] found id: ""
	I0318 22:00:31.149784   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.149794   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:31.149802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:31.149862   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:31.190355   65622 cri.go:89] found id: ""
	I0318 22:00:31.190383   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.190393   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:31.190401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:31.190462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:31.229866   65622 cri.go:89] found id: ""
	I0318 22:00:31.229892   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.229900   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:31.229909   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:31.229926   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:31.284984   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:31.285027   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:31.301026   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:31.301050   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:31.378120   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:31.378143   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:31.378158   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:31.459445   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:31.459475   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:34.003989   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:34.020959   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:34.021012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:34.060045   65622 cri.go:89] found id: ""
	I0318 22:00:34.060074   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.060086   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:34.060103   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:34.060151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:34.101259   65622 cri.go:89] found id: ""
	I0318 22:00:34.101289   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.101299   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:34.101307   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:34.101372   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:34.141056   65622 cri.go:89] found id: ""
	I0318 22:00:34.141085   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.141096   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:34.141103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:34.141166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:34.179757   65622 cri.go:89] found id: ""
	I0318 22:00:34.179786   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.179797   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:34.179805   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:34.179872   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:34.221928   65622 cri.go:89] found id: ""
	I0318 22:00:34.221956   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.221989   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:34.221998   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:34.222063   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:34.260775   65622 cri.go:89] found id: ""
	I0318 22:00:34.260796   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.260804   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:34.260809   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:34.260866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:34.300910   65622 cri.go:89] found id: ""
	I0318 22:00:34.300936   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.300944   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:34.300950   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:34.300994   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:34.343581   65622 cri.go:89] found id: ""
	I0318 22:00:34.343611   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.343619   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:34.343628   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:34.343640   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:34.399298   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:34.399330   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:34.414580   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:34.414619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:34.488013   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:34.488031   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:34.488043   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:34.580958   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:34.580994   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:31.821301   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:34.322210   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:33.299227   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.800314   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:32.693924   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.191865   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.129601   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:37.147758   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:37.147827   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:37.194763   65622 cri.go:89] found id: ""
	I0318 22:00:37.194784   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.194791   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:37.194797   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:37.194845   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:37.236298   65622 cri.go:89] found id: ""
	I0318 22:00:37.236326   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.236334   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:37.236353   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:37.236488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:37.274776   65622 cri.go:89] found id: ""
	I0318 22:00:37.274803   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.274813   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:37.274819   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:37.274883   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:37.319360   65622 cri.go:89] found id: ""
	I0318 22:00:37.319385   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.319395   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:37.319401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:37.319463   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:37.365699   65622 cri.go:89] found id: ""
	I0318 22:00:37.365726   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.365734   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:37.365740   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:37.365824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:37.404758   65622 cri.go:89] found id: ""
	I0318 22:00:37.404789   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.404799   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:37.404807   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:37.404874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:37.444567   65622 cri.go:89] found id: ""
	I0318 22:00:37.444591   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.444598   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:37.444603   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:37.444665   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:37.487729   65622 cri.go:89] found id: ""
	I0318 22:00:37.487752   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.487760   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:37.487767   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:37.487786   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:37.566214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:37.566235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:37.566258   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:37.647847   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:37.647930   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:37.693027   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:37.693057   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:37.748111   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:37.748152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:36.324995   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.820800   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.298887   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.299570   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.193636   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:39.693273   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.277510   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:40.292312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:40.292384   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:40.330335   65622 cri.go:89] found id: ""
	I0318 22:00:40.330368   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.330379   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:40.330386   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:40.330441   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:40.372534   65622 cri.go:89] found id: ""
	I0318 22:00:40.372560   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.372570   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:40.372577   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:40.372624   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:40.409430   65622 cri.go:89] found id: ""
	I0318 22:00:40.409460   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.409471   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:40.409478   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:40.409525   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:40.448350   65622 cri.go:89] found id: ""
	I0318 22:00:40.448372   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.448380   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:40.448385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:40.448431   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:40.490526   65622 cri.go:89] found id: ""
	I0318 22:00:40.490550   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.490559   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:40.490564   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:40.490613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:40.528926   65622 cri.go:89] found id: ""
	I0318 22:00:40.528953   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.528963   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:40.528971   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:40.529031   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:40.565779   65622 cri.go:89] found id: ""
	I0318 22:00:40.565808   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.565818   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:40.565826   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:40.565902   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:40.604152   65622 cri.go:89] found id: ""
	I0318 22:00:40.604181   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.604192   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:40.604201   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:40.604215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:40.689274   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:40.689310   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:40.736810   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:40.736844   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:40.796033   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:40.796061   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:40.811906   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:40.811929   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:40.889595   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:43.390663   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:43.407179   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:43.407254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:43.448653   65622 cri.go:89] found id: ""
	I0318 22:00:43.448685   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.448696   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:43.448704   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:43.448772   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:43.489437   65622 cri.go:89] found id: ""
	I0318 22:00:43.489464   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.489472   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:43.489478   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:43.489533   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:43.564173   65622 cri.go:89] found id: ""
	I0318 22:00:43.564199   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.564209   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:43.564217   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:43.564278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:43.606221   65622 cri.go:89] found id: ""
	I0318 22:00:43.606250   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.606260   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:43.606267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:43.606333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:43.646748   65622 cri.go:89] found id: ""
	I0318 22:00:43.646782   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.646794   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:43.646802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:43.646864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:43.690465   65622 cri.go:89] found id: ""
	I0318 22:00:43.690496   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.690509   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:43.690519   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:43.690584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:43.730421   65622 cri.go:89] found id: ""
	I0318 22:00:43.730454   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.730464   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:43.730473   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:43.730538   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:43.769597   65622 cri.go:89] found id: ""
	I0318 22:00:43.769626   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.769636   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:43.769646   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:43.769660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:43.858316   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:43.858351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:43.907387   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:43.907417   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:43.963234   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:43.963271   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:43.979226   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:43.979253   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:44.065174   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:40.821224   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:43.319945   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.300484   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.300924   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.302264   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.192508   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.192743   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.566048   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:46.583140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:46.583212   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:46.624593   65622 cri.go:89] found id: ""
	I0318 22:00:46.624634   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.624643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:46.624649   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:46.624700   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:46.664828   65622 cri.go:89] found id: ""
	I0318 22:00:46.664858   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.664868   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:46.664874   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:46.664944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:46.703632   65622 cri.go:89] found id: ""
	I0318 22:00:46.703658   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.703668   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:46.703675   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:46.703736   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:46.743379   65622 cri.go:89] found id: ""
	I0318 22:00:46.743409   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.743420   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:46.743427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:46.743487   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:46.784145   65622 cri.go:89] found id: ""
	I0318 22:00:46.784169   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.784178   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:46.784184   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:46.784233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:46.826469   65622 cri.go:89] found id: ""
	I0318 22:00:46.826491   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.826498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:46.826504   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:46.826559   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:46.868061   65622 cri.go:89] found id: ""
	I0318 22:00:46.868089   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.868102   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:46.868110   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:46.868167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:46.910584   65622 cri.go:89] found id: ""
	I0318 22:00:46.910612   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.910622   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:46.910630   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:46.910642   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:46.954131   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:46.954157   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:47.008706   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:47.008737   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:47.024447   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:47.024474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:47.113208   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:47.113228   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:47.113242   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:49.699416   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:49.714870   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:49.714943   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:49.754386   65622 cri.go:89] found id: ""
	I0318 22:00:49.754415   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.754424   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:49.754430   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:49.754485   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:49.800223   65622 cri.go:89] found id: ""
	I0318 22:00:49.800248   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.800258   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:49.800268   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:49.800331   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:49.846747   65622 cri.go:89] found id: ""
	I0318 22:00:49.846775   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.846785   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:49.846793   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:49.846842   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:49.885554   65622 cri.go:89] found id: ""
	I0318 22:00:49.885581   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.885592   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:49.885600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:49.885652   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:49.925116   65622 cri.go:89] found id: ""
	I0318 22:00:49.925136   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.925144   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:49.925149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:49.925193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:49.968467   65622 cri.go:89] found id: ""
	I0318 22:00:49.968491   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.968498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:49.968503   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:49.968575   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:45.321277   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:47.821205   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.822803   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:48.799135   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.801798   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.692554   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.193102   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:51.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.016222   65622 cri.go:89] found id: ""
	I0318 22:00:50.016253   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.016261   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:50.016267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:50.016320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:50.057053   65622 cri.go:89] found id: ""
	I0318 22:00:50.057074   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.057082   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:50.057090   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:50.057101   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:50.137602   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:50.137631   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:50.213200   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:50.213227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:50.293533   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:50.293568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:50.312993   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:50.313019   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:50.399235   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:52.900027   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:52.914846   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:52.914918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:52.951864   65622 cri.go:89] found id: ""
	I0318 22:00:52.951887   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.951895   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:52.951900   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:52.951959   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:52.992339   65622 cri.go:89] found id: ""
	I0318 22:00:52.992374   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.992386   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:52.992393   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:52.992448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:53.030499   65622 cri.go:89] found id: ""
	I0318 22:00:53.030527   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.030536   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:53.030543   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:53.030610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:53.069607   65622 cri.go:89] found id: ""
	I0318 22:00:53.069635   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.069645   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:53.069652   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:53.069706   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:53.110235   65622 cri.go:89] found id: ""
	I0318 22:00:53.110256   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.110263   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:53.110269   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:53.110320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:53.152066   65622 cri.go:89] found id: ""
	I0318 22:00:53.152092   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.152100   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:53.152106   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:53.152166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:53.195360   65622 cri.go:89] found id: ""
	I0318 22:00:53.195386   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.195395   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:53.195402   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:53.195448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:53.235134   65622 cri.go:89] found id: ""
	I0318 22:00:53.235159   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.235166   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:53.235174   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:53.235186   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:53.286442   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:53.286473   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:53.342152   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:53.342183   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:53.358414   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:53.358438   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:53.430515   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:53.430534   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:53.430545   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:52.320478   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:54.321815   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.301031   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:55.799954   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.693639   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.193657   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.016088   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:56.034274   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:56.034350   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:56.095539   65622 cri.go:89] found id: ""
	I0318 22:00:56.095565   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.095581   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:56.095588   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:56.095645   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:56.149796   65622 cri.go:89] found id: ""
	I0318 22:00:56.149824   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.149834   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:56.149845   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:56.149907   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:56.205720   65622 cri.go:89] found id: ""
	I0318 22:00:56.205745   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.205760   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:56.205768   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:56.205828   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:56.250790   65622 cri.go:89] found id: ""
	I0318 22:00:56.250834   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.250862   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:56.250876   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:56.250944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:56.290516   65622 cri.go:89] found id: ""
	I0318 22:00:56.290538   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.290545   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:56.290552   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:56.290609   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:56.335528   65622 cri.go:89] found id: ""
	I0318 22:00:56.335557   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.335570   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:56.335577   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:56.335638   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:56.380336   65622 cri.go:89] found id: ""
	I0318 22:00:56.380365   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.380376   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:56.380383   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:56.380448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:56.426326   65622 cri.go:89] found id: ""
	I0318 22:00:56.426351   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.426359   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:56.426368   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:56.426385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:56.479966   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:56.480002   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.495557   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:56.495588   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:56.573474   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:56.573495   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:56.573506   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:56.657795   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:56.657826   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.206212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:59.221879   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:59.221936   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:59.265944   65622 cri.go:89] found id: ""
	I0318 22:00:59.265976   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.265986   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:59.265994   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:59.266052   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:59.305105   65622 cri.go:89] found id: ""
	I0318 22:00:59.305125   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.305132   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:59.305137   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:59.305182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:59.343573   65622 cri.go:89] found id: ""
	I0318 22:00:59.343600   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.343610   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:59.343618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:59.343674   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:59.385560   65622 cri.go:89] found id: ""
	I0318 22:00:59.385580   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.385587   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:59.385592   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:59.385639   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:59.422955   65622 cri.go:89] found id: ""
	I0318 22:00:59.422983   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.422994   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:59.423001   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:59.423062   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:59.460526   65622 cri.go:89] found id: ""
	I0318 22:00:59.460550   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.460561   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:59.460569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:59.460627   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:59.502703   65622 cri.go:89] found id: ""
	I0318 22:00:59.502732   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.502739   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:59.502753   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:59.502803   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:59.539097   65622 cri.go:89] found id: ""
	I0318 22:00:59.539120   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.539128   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:59.539136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:59.539147   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:59.613607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:59.613628   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:59.613643   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:59.697432   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:59.697460   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.744643   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:59.744671   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:59.800670   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:59.800704   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.820977   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.822348   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:57.804405   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.299016   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.692166   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.692526   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.318430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:02.334082   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:02.334158   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:02.383122   65622 cri.go:89] found id: ""
	I0318 22:01:02.383151   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.383161   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:02.383169   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:02.383229   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:02.426847   65622 cri.go:89] found id: ""
	I0318 22:01:02.426874   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.426884   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:02.426891   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:02.426955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:02.466377   65622 cri.go:89] found id: ""
	I0318 22:01:02.466403   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.466429   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:02.466437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:02.466501   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:02.506916   65622 cri.go:89] found id: ""
	I0318 22:01:02.506943   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.506953   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:02.506961   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:02.507021   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:02.549401   65622 cri.go:89] found id: ""
	I0318 22:01:02.549431   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.549439   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:02.549445   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:02.549494   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:02.589498   65622 cri.go:89] found id: ""
	I0318 22:01:02.589524   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.589535   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:02.589542   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:02.589603   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:02.626325   65622 cri.go:89] found id: ""
	I0318 22:01:02.626358   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.626369   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:02.626376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:02.626440   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:02.664922   65622 cri.go:89] found id: ""
	I0318 22:01:02.664949   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.664958   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:02.664969   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:02.664986   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:02.722853   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:02.722883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:02.740280   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:02.740305   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:02.819215   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:02.819232   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:02.819244   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:02.902355   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:02.902395   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:01.319955   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:03.324127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.299297   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:04.299721   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.694116   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.193971   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.452180   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:05.465921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:05.465981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:05.507224   65622 cri.go:89] found id: ""
	I0318 22:01:05.507245   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.507255   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:05.507262   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:05.507329   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:05.544705   65622 cri.go:89] found id: ""
	I0318 22:01:05.544737   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.544748   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:05.544754   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:05.544814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:05.583552   65622 cri.go:89] found id: ""
	I0318 22:01:05.583580   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.583592   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:05.583600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:05.583668   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:05.620969   65622 cri.go:89] found id: ""
	I0318 22:01:05.620995   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.621002   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:05.621009   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:05.621054   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:05.662789   65622 cri.go:89] found id: ""
	I0318 22:01:05.662816   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.662827   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:05.662835   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:05.662900   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:05.701457   65622 cri.go:89] found id: ""
	I0318 22:01:05.701496   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.701506   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:05.701513   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:05.701566   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:05.742050   65622 cri.go:89] found id: ""
	I0318 22:01:05.742078   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.742088   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:05.742095   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:05.742162   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:05.782620   65622 cri.go:89] found id: ""
	I0318 22:01:05.782645   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.782653   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:05.782661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:05.782672   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:05.875779   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:05.875815   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.927687   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:05.927711   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:05.979235   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:05.979264   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:05.997508   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:05.997536   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:06.073619   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:08.574277   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:08.588248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:08.588312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:08.626950   65622 cri.go:89] found id: ""
	I0318 22:01:08.626976   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.626987   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:08.626993   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:08.627050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:08.670404   65622 cri.go:89] found id: ""
	I0318 22:01:08.670429   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.670436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:08.670442   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:08.670505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:08.706036   65622 cri.go:89] found id: ""
	I0318 22:01:08.706063   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.706072   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:08.706079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:08.706134   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:08.743251   65622 cri.go:89] found id: ""
	I0318 22:01:08.743279   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.743290   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:08.743298   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:08.743361   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:08.782303   65622 cri.go:89] found id: ""
	I0318 22:01:08.782329   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.782340   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:08.782347   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:08.782413   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:08.827060   65622 cri.go:89] found id: ""
	I0318 22:01:08.827086   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.827095   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:08.827104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:08.827157   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:08.867098   65622 cri.go:89] found id: ""
	I0318 22:01:08.867126   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.867137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:08.867145   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:08.867192   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:08.906283   65622 cri.go:89] found id: ""
	I0318 22:01:08.906314   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.906323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:08.906334   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:08.906349   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:08.959145   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:08.959171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:08.976307   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:08.976336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:09.049255   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:09.049285   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:09.049300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:09.139458   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:09.139493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.821257   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.320779   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:06.799599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.800534   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.301906   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:07.195710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:09.691770   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.687215   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:11.701855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:11.701926   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:11.740185   65622 cri.go:89] found id: ""
	I0318 22:01:11.740213   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.740224   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:11.740231   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:11.740293   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:11.782083   65622 cri.go:89] found id: ""
	I0318 22:01:11.782110   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.782119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:11.782126   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:11.782187   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:11.830887   65622 cri.go:89] found id: ""
	I0318 22:01:11.830910   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.830920   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:11.830928   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:11.830981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:11.868585   65622 cri.go:89] found id: ""
	I0318 22:01:11.868607   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.868613   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:11.868618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:11.868673   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:11.912298   65622 cri.go:89] found id: ""
	I0318 22:01:11.912324   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.912336   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:11.912343   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:11.912396   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:11.957511   65622 cri.go:89] found id: ""
	I0318 22:01:11.957536   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.957546   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:11.957553   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:11.957610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:11.998894   65622 cri.go:89] found id: ""
	I0318 22:01:11.998916   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.998927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:11.998934   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:11.998984   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:12.039419   65622 cri.go:89] found id: ""
	I0318 22:01:12.039446   65622 logs.go:276] 0 containers: []
	W0318 22:01:12.039458   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:12.039468   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:12.039484   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:12.094721   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:12.094750   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:12.110328   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:12.110351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:12.183351   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:12.183371   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:12.183385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:12.260772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:12.260812   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:14.806518   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:14.821701   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:14.821760   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:14.864280   65622 cri.go:89] found id: ""
	I0318 22:01:14.864307   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.864316   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:14.864322   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:14.864380   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:14.913041   65622 cri.go:89] found id: ""
	I0318 22:01:14.913071   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.913083   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:14.913091   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:14.913155   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:14.951563   65622 cri.go:89] found id: ""
	I0318 22:01:14.951586   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.951594   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:14.951600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:14.951651   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:10.321379   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:12.321708   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.324578   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:13.303344   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:15.799107   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.692795   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.192711   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:16.192974   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.993070   65622 cri.go:89] found id: ""
	I0318 22:01:14.993103   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.993114   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:14.993122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:14.993182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:15.033552   65622 cri.go:89] found id: ""
	I0318 22:01:15.033580   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.033591   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:15.033600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:15.033660   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:15.075982   65622 cri.go:89] found id: ""
	I0318 22:01:15.076009   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.076020   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:15.076031   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:15.076090   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:15.118757   65622 cri.go:89] found id: ""
	I0318 22:01:15.118784   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.118795   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:15.118801   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:15.118844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:15.160333   65622 cri.go:89] found id: ""
	I0318 22:01:15.160355   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.160366   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:15.160374   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:15.160387   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:15.239607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:15.239635   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:15.239653   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:15.324254   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:15.324285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:15.370722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:15.370754   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:15.423268   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:15.423297   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:17.940107   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:17.954692   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:17.954749   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:18.001810   65622 cri.go:89] found id: ""
	I0318 22:01:18.001831   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.001838   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:18.001844   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:18.001903   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:18.042871   65622 cri.go:89] found id: ""
	I0318 22:01:18.042897   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.042909   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:18.042916   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:18.042975   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:18.083933   65622 cri.go:89] found id: ""
	I0318 22:01:18.083956   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.083964   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:18.083969   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:18.084019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:18.125590   65622 cri.go:89] found id: ""
	I0318 22:01:18.125617   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.125628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:18.125636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:18.125697   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:18.166696   65622 cri.go:89] found id: ""
	I0318 22:01:18.166727   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.166737   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:18.166745   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:18.166806   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:18.211273   65622 cri.go:89] found id: ""
	I0318 22:01:18.211297   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.211308   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:18.211315   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:18.211382   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:18.251821   65622 cri.go:89] found id: ""
	I0318 22:01:18.251844   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.251851   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:18.251860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:18.251918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:18.290507   65622 cri.go:89] found id: ""
	I0318 22:01:18.290531   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.290541   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:18.290552   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:18.290568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:18.349013   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:18.349041   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:18.366082   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:18.366113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:18.441742   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:18.441766   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:18.441780   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:18.535299   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:18.535335   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:16.820809   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.820856   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:17.800874   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.301479   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.691838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.692582   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:21.077652   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:21.092980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:21.093039   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:21.132742   65622 cri.go:89] found id: ""
	I0318 22:01:21.132762   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.132770   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:21.132776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:21.132833   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:21.170814   65622 cri.go:89] found id: ""
	I0318 22:01:21.170836   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.170844   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:21.170849   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:21.170911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:21.212812   65622 cri.go:89] found id: ""
	I0318 22:01:21.212845   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.212853   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:21.212860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:21.212924   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:21.254010   65622 cri.go:89] found id: ""
	I0318 22:01:21.254036   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.254044   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:21.254052   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:21.254095   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:21.292032   65622 cri.go:89] found id: ""
	I0318 22:01:21.292061   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.292073   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:21.292083   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:21.292152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:21.336946   65622 cri.go:89] found id: ""
	I0318 22:01:21.336975   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.336985   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:21.336992   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:21.337043   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:21.380295   65622 cri.go:89] found id: ""
	I0318 22:01:21.380319   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.380328   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:21.380336   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:21.380399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:21.417674   65622 cri.go:89] found id: ""
	I0318 22:01:21.417701   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.417708   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:21.417717   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:21.417728   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:21.470782   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:21.470808   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:21.486015   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:21.486036   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:21.560654   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:21.560682   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:21.560699   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:21.644108   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:21.644146   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:24.190787   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:24.205695   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:24.205761   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:24.262577   65622 cri.go:89] found id: ""
	I0318 22:01:24.262602   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.262610   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:24.262615   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:24.262680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:24.304807   65622 cri.go:89] found id: ""
	I0318 22:01:24.304835   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.304845   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:24.304853   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:24.304933   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:24.345595   65622 cri.go:89] found id: ""
	I0318 22:01:24.345670   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.345688   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:24.345696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:24.345762   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:24.388471   65622 cri.go:89] found id: ""
	I0318 22:01:24.388498   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.388508   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:24.388515   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:24.388573   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:24.429610   65622 cri.go:89] found id: ""
	I0318 22:01:24.429641   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.429653   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:24.429663   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:24.429728   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:24.469661   65622 cri.go:89] found id: ""
	I0318 22:01:24.469683   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.469690   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:24.469696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:24.469740   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:24.508086   65622 cri.go:89] found id: ""
	I0318 22:01:24.508115   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.508126   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:24.508133   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:24.508195   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:24.548963   65622 cri.go:89] found id: ""
	I0318 22:01:24.548988   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.548998   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:24.549009   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:24.549028   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:24.603983   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:24.604012   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:24.620185   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:24.620207   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:24.699677   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:24.699699   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:24.699713   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:24.778830   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:24.778884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:20.821237   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.320180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:22.302559   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:24.800442   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.193491   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:25.692671   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.334749   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:27.349132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:27.349188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:27.394163   65622 cri.go:89] found id: ""
	I0318 22:01:27.394190   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.394197   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:27.394203   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:27.394259   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:27.435176   65622 cri.go:89] found id: ""
	I0318 22:01:27.435198   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.435207   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:27.435215   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:27.435273   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:27.475388   65622 cri.go:89] found id: ""
	I0318 22:01:27.475414   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.475422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:27.475427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:27.475474   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:27.516225   65622 cri.go:89] found id: ""
	I0318 22:01:27.516247   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.516255   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:27.516265   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:27.516321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:27.554423   65622 cri.go:89] found id: ""
	I0318 22:01:27.554451   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.554459   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:27.554465   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:27.554518   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:27.592315   65622 cri.go:89] found id: ""
	I0318 22:01:27.592342   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.592352   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:27.592360   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:27.592418   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:27.634820   65622 cri.go:89] found id: ""
	I0318 22:01:27.634842   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.634849   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:27.634855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:27.634912   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:27.673677   65622 cri.go:89] found id: ""
	I0318 22:01:27.673703   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.673713   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:27.673724   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:27.673738   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:27.728342   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:27.728370   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:27.745465   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:27.745493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:27.817800   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:27.817822   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:27.817836   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:27.905115   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:27.905152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:25.322575   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.323097   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.821127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.302001   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.799369   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.693253   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.192347   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.450454   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:30.464916   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:30.464969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:30.504399   65622 cri.go:89] found id: ""
	I0318 22:01:30.504432   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.504443   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:30.504452   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:30.504505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:30.543216   65622 cri.go:89] found id: ""
	I0318 22:01:30.543240   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.543248   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:30.543254   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:30.543310   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:30.581415   65622 cri.go:89] found id: ""
	I0318 22:01:30.581440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.581451   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:30.581459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:30.581515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:30.620419   65622 cri.go:89] found id: ""
	I0318 22:01:30.620440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.620447   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:30.620453   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:30.620495   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:30.671859   65622 cri.go:89] found id: ""
	I0318 22:01:30.671886   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.671893   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:30.671899   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:30.671955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:30.732705   65622 cri.go:89] found id: ""
	I0318 22:01:30.732732   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.732742   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:30.732750   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:30.732811   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:30.793811   65622 cri.go:89] found id: ""
	I0318 22:01:30.793839   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.793850   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:30.793856   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:30.793915   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:30.851516   65622 cri.go:89] found id: ""
	I0318 22:01:30.851539   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.851546   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:30.851555   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:30.851566   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:30.907463   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:30.907496   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:30.924254   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:30.924286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:31.002155   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:31.002177   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:31.002193   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:31.085486   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:31.085515   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:33.627379   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:33.641314   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:33.641378   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:33.683093   65622 cri.go:89] found id: ""
	I0318 22:01:33.683119   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.683129   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:33.683136   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:33.683193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:33.724006   65622 cri.go:89] found id: ""
	I0318 22:01:33.724034   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.724042   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:33.724048   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:33.724091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:33.761196   65622 cri.go:89] found id: ""
	I0318 22:01:33.761224   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.761240   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:33.761248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:33.761306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:33.800636   65622 cri.go:89] found id: ""
	I0318 22:01:33.800661   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.800670   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:33.800676   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:33.800733   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:33.839423   65622 cri.go:89] found id: ""
	I0318 22:01:33.839450   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.839458   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:33.839464   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:33.839508   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:33.883076   65622 cri.go:89] found id: ""
	I0318 22:01:33.883102   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.883112   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:33.883118   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:33.883174   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:33.921886   65622 cri.go:89] found id: ""
	I0318 22:01:33.921909   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.921920   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:33.921926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:33.921981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:33.964632   65622 cri.go:89] found id: ""
	I0318 22:01:33.964659   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.964670   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:33.964680   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:33.964700   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:34.043708   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:34.043731   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:34.043743   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:34.129150   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:34.129178   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:34.176067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:34.176089   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:34.231399   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:34.231433   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:32.324221   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.821547   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.301599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.798017   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.692835   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.693519   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.747929   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:36.761803   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:36.761859   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:36.806407   65622 cri.go:89] found id: ""
	I0318 22:01:36.806434   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.806441   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:36.806447   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:36.806498   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:36.849046   65622 cri.go:89] found id: ""
	I0318 22:01:36.849073   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.849084   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:36.849092   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:36.849152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:36.889880   65622 cri.go:89] found id: ""
	I0318 22:01:36.889910   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.889922   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:36.889929   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:36.889995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:36.936012   65622 cri.go:89] found id: ""
	I0318 22:01:36.936033   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.936041   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:36.936046   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:36.936094   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:36.977538   65622 cri.go:89] found id: ""
	I0318 22:01:36.977568   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.977578   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:36.977587   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:36.977647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:37.014843   65622 cri.go:89] found id: ""
	I0318 22:01:37.014870   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.014881   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:37.014888   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:37.014956   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:37.055058   65622 cri.go:89] found id: ""
	I0318 22:01:37.055086   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.055097   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:37.055104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:37.055167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:37.100605   65622 cri.go:89] found id: ""
	I0318 22:01:37.100633   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.100642   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:37.100652   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:37.100666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:37.181840   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:37.181874   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:37.232689   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:37.232721   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:37.287264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:37.287294   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:37.305614   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:37.305638   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:37.389196   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:39.889461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:39.904409   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:39.904472   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:39.944610   65622 cri.go:89] found id: ""
	I0318 22:01:39.944633   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.944641   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:39.944647   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:39.944701   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:37.323580   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.325038   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.798108   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:38.799072   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:40.799797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.694495   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.192489   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:41.193100   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.984337   65622 cri.go:89] found id: ""
	I0318 22:01:39.984360   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.984367   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:39.984373   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:39.984427   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:40.026238   65622 cri.go:89] found id: ""
	I0318 22:01:40.026264   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.026276   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:40.026282   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:40.026338   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:40.075591   65622 cri.go:89] found id: ""
	I0318 22:01:40.075619   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.075628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:40.075636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:40.075686   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:40.126829   65622 cri.go:89] found id: ""
	I0318 22:01:40.126859   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.126871   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:40.126880   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:40.126941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:40.167695   65622 cri.go:89] found id: ""
	I0318 22:01:40.167724   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.167735   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:40.167744   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:40.167802   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:40.205545   65622 cri.go:89] found id: ""
	I0318 22:01:40.205570   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.205582   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:40.205589   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:40.205636   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:40.245521   65622 cri.go:89] found id: ""
	I0318 22:01:40.245547   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.245556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:40.245567   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:40.245583   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:40.306315   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:40.306348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:40.324996   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:40.325021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:40.406484   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:40.406513   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:40.406526   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:40.492294   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:40.492323   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:43.034812   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:43.049661   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:43.049727   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:43.089419   65622 cri.go:89] found id: ""
	I0318 22:01:43.089444   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.089453   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:43.089461   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:43.089515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:43.130350   65622 cri.go:89] found id: ""
	I0318 22:01:43.130384   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.130394   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:43.130401   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:43.130462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:43.171480   65622 cri.go:89] found id: ""
	I0318 22:01:43.171506   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.171515   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:43.171522   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:43.171567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:43.210215   65622 cri.go:89] found id: ""
	I0318 22:01:43.210240   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.210249   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:43.210258   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:43.210312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:43.247024   65622 cri.go:89] found id: ""
	I0318 22:01:43.247049   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.247056   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:43.247063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:43.247113   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:43.283614   65622 cri.go:89] found id: ""
	I0318 22:01:43.283640   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.283651   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:43.283659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:43.283716   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:43.327442   65622 cri.go:89] found id: ""
	I0318 22:01:43.327468   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.327478   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:43.327486   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:43.327544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:43.365732   65622 cri.go:89] found id: ""
	I0318 22:01:43.365760   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.365769   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:43.365780   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:43.365793   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:43.425359   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:43.425396   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:43.442136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:43.442161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:43.519737   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:43.519762   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:43.519777   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:43.602933   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:43.602972   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:41.821043   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:44.322040   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:42.802267   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.301098   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:43.692766   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.693595   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:46.146009   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:46.161266   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:46.161333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:46.203056   65622 cri.go:89] found id: ""
	I0318 22:01:46.203082   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.203094   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:46.203101   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:46.203159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:46.245954   65622 cri.go:89] found id: ""
	I0318 22:01:46.245981   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.245991   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:46.245998   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:46.246069   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:46.282395   65622 cri.go:89] found id: ""
	I0318 22:01:46.282420   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.282431   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:46.282438   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:46.282497   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:46.322036   65622 cri.go:89] found id: ""
	I0318 22:01:46.322061   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.322072   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:46.322079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:46.322136   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:46.360951   65622 cri.go:89] found id: ""
	I0318 22:01:46.360973   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.360981   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:46.360987   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:46.361049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:46.399334   65622 cri.go:89] found id: ""
	I0318 22:01:46.399364   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.399382   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:46.399391   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:46.399450   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:46.443891   65622 cri.go:89] found id: ""
	I0318 22:01:46.443922   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.443933   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:46.443940   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:46.443990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:46.483047   65622 cri.go:89] found id: ""
	I0318 22:01:46.483088   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.483099   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:46.483110   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:46.483124   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:46.542995   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:46.543026   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.559582   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:46.559605   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:46.637046   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:46.637065   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:46.637076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:46.719628   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:46.719657   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.263990   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:49.278403   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:49.278469   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:49.322980   65622 cri.go:89] found id: ""
	I0318 22:01:49.323003   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.323014   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:49.323021   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:49.323077   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:49.360100   65622 cri.go:89] found id: ""
	I0318 22:01:49.360120   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.360127   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:49.360132   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:49.360180   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:49.402044   65622 cri.go:89] found id: ""
	I0318 22:01:49.402084   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.402095   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:49.402103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:49.402164   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:49.442337   65622 cri.go:89] found id: ""
	I0318 22:01:49.442367   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.442391   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:49.442397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:49.442448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:49.479079   65622 cri.go:89] found id: ""
	I0318 22:01:49.479111   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.479124   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:49.479132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:49.479197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:49.526057   65622 cri.go:89] found id: ""
	I0318 22:01:49.526080   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.526090   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:49.526098   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:49.526159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:49.566720   65622 cri.go:89] found id: ""
	I0318 22:01:49.566747   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.566759   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:49.566767   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:49.566821   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:49.603120   65622 cri.go:89] found id: ""
	I0318 22:01:49.603142   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.603152   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:49.603163   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:49.603180   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:49.677879   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:49.677904   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:49.677921   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:49.762904   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:49.762933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.809332   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:49.809358   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:49.861568   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:49.861599   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.322167   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.322495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:47.800006   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.298196   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.193259   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.195154   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.377996   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:52.396078   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:52.396159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:52.435945   65622 cri.go:89] found id: ""
	I0318 22:01:52.435972   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.435980   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:52.435985   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:52.436034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:52.478723   65622 cri.go:89] found id: ""
	I0318 22:01:52.478754   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.478765   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:52.478772   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:52.478835   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:52.522240   65622 cri.go:89] found id: ""
	I0318 22:01:52.522267   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.522275   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:52.522281   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:52.522336   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:52.560168   65622 cri.go:89] found id: ""
	I0318 22:01:52.560195   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.560202   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:52.560208   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:52.560253   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:52.599730   65622 cri.go:89] found id: ""
	I0318 22:01:52.599752   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.599759   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:52.599765   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:52.599810   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:52.640357   65622 cri.go:89] found id: ""
	I0318 22:01:52.640386   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.640400   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:52.640407   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:52.640465   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:52.680925   65622 cri.go:89] found id: ""
	I0318 22:01:52.680954   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.680966   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:52.680972   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:52.681041   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:52.719537   65622 cri.go:89] found id: ""
	I0318 22:01:52.719561   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.719570   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:52.719580   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:52.719597   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:52.773264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:52.773292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:52.788278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:52.788302   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:52.866674   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:52.866700   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:52.866714   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:52.952228   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:52.952263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:50.821598   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:53.321546   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.302659   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:54.799292   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.692794   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.192968   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.499710   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:55.514986   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:55.515049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:55.561168   65622 cri.go:89] found id: ""
	I0318 22:01:55.561191   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.561198   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:55.561204   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:55.561252   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:55.606505   65622 cri.go:89] found id: ""
	I0318 22:01:55.606534   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.606545   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:55.606552   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:55.606613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:55.648625   65622 cri.go:89] found id: ""
	I0318 22:01:55.648655   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.648665   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:55.648672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:55.648731   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:55.690878   65622 cri.go:89] found id: ""
	I0318 22:01:55.690903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.690914   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:55.690923   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:55.690987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:55.729873   65622 cri.go:89] found id: ""
	I0318 22:01:55.729903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.729914   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:55.729921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:55.729982   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:55.767926   65622 cri.go:89] found id: ""
	I0318 22:01:55.767951   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.767959   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:55.767965   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:55.768025   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:55.809907   65622 cri.go:89] found id: ""
	I0318 22:01:55.809934   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.809942   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:55.809947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:55.810009   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:55.853992   65622 cri.go:89] found id: ""
	I0318 22:01:55.854023   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.854032   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:55.854041   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:55.854060   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:55.932160   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:55.932185   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:55.932200   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:56.019976   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:56.020010   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:56.063901   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:56.063935   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:56.119282   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:56.119314   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:58.636555   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:58.651774   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:58.651851   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:58.697005   65622 cri.go:89] found id: ""
	I0318 22:01:58.697037   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.697047   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:58.697055   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:58.697128   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:58.742190   65622 cri.go:89] found id: ""
	I0318 22:01:58.742218   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.742229   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:58.742236   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:58.742297   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:58.779335   65622 cri.go:89] found id: ""
	I0318 22:01:58.779359   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.779378   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:58.779385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:58.779445   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:58.818936   65622 cri.go:89] found id: ""
	I0318 22:01:58.818964   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.818972   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:58.818980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:58.819034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:58.856473   65622 cri.go:89] found id: ""
	I0318 22:01:58.856500   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.856511   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:58.856518   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:58.856579   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:58.897381   65622 cri.go:89] found id: ""
	I0318 22:01:58.897412   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.897423   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:58.897432   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:58.897503   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:58.938179   65622 cri.go:89] found id: ""
	I0318 22:01:58.938209   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.938221   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:58.938228   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:58.938295   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:58.981021   65622 cri.go:89] found id: ""
	I0318 22:01:58.981049   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.981059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:58.981067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:58.981081   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:59.054749   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:59.054779   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:59.070160   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:59.070188   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:59.150369   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:59.150385   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:59.150398   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:59.238341   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:59.238381   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:55.821471   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.822495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.299408   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.299964   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.193704   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.194959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.790139   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:01.807948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:01.808006   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:01.855198   65622 cri.go:89] found id: ""
	I0318 22:02:01.855224   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.855231   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:01.855238   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:01.855291   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:01.895292   65622 cri.go:89] found id: ""
	I0318 22:02:01.895313   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.895321   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:01.895326   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:01.895381   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:01.934102   65622 cri.go:89] found id: ""
	I0318 22:02:01.934127   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.934139   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:01.934146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:01.934196   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:01.975676   65622 cri.go:89] found id: ""
	I0318 22:02:01.975704   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.975715   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:01.975723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:01.975789   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:02.015656   65622 cri.go:89] found id: ""
	I0318 22:02:02.015691   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.015701   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:02.015710   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:02.015771   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:02.058634   65622 cri.go:89] found id: ""
	I0318 22:02:02.058658   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.058666   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:02.058672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:02.058719   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:02.096655   65622 cri.go:89] found id: ""
	I0318 22:02:02.096681   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.096692   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:02.096700   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:02.096767   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:02.137485   65622 cri.go:89] found id: ""
	I0318 22:02:02.137510   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.137519   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:02.137527   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:02.137543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:02.221269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:02.221304   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:02.265816   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:02.265846   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:02.321554   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:02.321592   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:02.338503   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:02.338530   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:02.431779   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
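Every pass through this loop ends the same way: the "describe nodes" step cannot reach the API server on localhost:8443, and the preceding crictl queries find no kube-apiserver, etcd, scheduler, controller-manager, or proxy containers, so the control plane never came up under CRI-O during this retry window. A minimal sketch of the same check, run by hand inside the guest (this assumes shell access via "minikube ssh" to the affected profile; the commands mirror the ones logged above, only the curl health probe is added here as an assumption):

    # list any control-plane containers CRI-O knows about (running or exited)
    sudo crictl ps -a --name=kube-apiserver
    sudo crictl ps -a --name=etcd

    # confirm whether anything is listening where kubectl expects the API server
    curl -ks https://localhost:8443/healthz || echo "apiserver not reachable"

    # inspect why the kubelet / CRI-O never brought up the static pods
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager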
	I0318 22:02:04.932229   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:04.948859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:04.948931   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:00.321126   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:02.321899   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.798818   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:03.800605   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:05.801459   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.693520   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.192449   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:06.192843   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.995353   65622 cri.go:89] found id: ""
	I0318 22:02:04.995379   65622 logs.go:276] 0 containers: []
	W0318 22:02:04.995386   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:04.995392   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:04.995438   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:05.034886   65622 cri.go:89] found id: ""
	I0318 22:02:05.034911   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.034922   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:05.034929   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:05.034995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:05.076635   65622 cri.go:89] found id: ""
	I0318 22:02:05.076663   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.076673   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:05.076681   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:05.076742   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:05.119481   65622 cri.go:89] found id: ""
	I0318 22:02:05.119506   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.119514   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:05.119520   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:05.119571   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:05.162331   65622 cri.go:89] found id: ""
	I0318 22:02:05.162354   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.162369   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:05.162376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:05.162428   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:05.206038   65622 cri.go:89] found id: ""
	I0318 22:02:05.206066   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.206076   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:05.206084   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:05.206142   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:05.251273   65622 cri.go:89] found id: ""
	I0318 22:02:05.251298   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.251309   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:05.251316   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:05.251375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:05.292855   65622 cri.go:89] found id: ""
	I0318 22:02:05.292882   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.292892   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:05.292917   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:05.292933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:05.310330   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:05.310354   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:05.384915   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:05.384938   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:05.384957   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:05.472147   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:05.472182   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:05.544328   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:05.544351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:08.101241   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:08.117397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:08.117515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:08.160011   65622 cri.go:89] found id: ""
	I0318 22:02:08.160035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.160043   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:08.160048   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:08.160100   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:08.202826   65622 cri.go:89] found id: ""
	I0318 22:02:08.202849   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.202860   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:08.202867   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:08.202935   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:08.241743   65622 cri.go:89] found id: ""
	I0318 22:02:08.241780   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.241792   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:08.241800   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:08.241864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:08.280725   65622 cri.go:89] found id: ""
	I0318 22:02:08.280758   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.280769   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:08.280777   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:08.280840   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:08.324015   65622 cri.go:89] found id: ""
	I0318 22:02:08.324035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.324041   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:08.324047   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:08.324104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:08.367332   65622 cri.go:89] found id: ""
	I0318 22:02:08.367356   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.367368   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:08.367375   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:08.367433   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:08.407042   65622 cri.go:89] found id: ""
	I0318 22:02:08.407066   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.407073   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:08.407079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:08.407126   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:08.443800   65622 cri.go:89] found id: ""
	I0318 22:02:08.443820   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.443827   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:08.443836   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:08.443850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:08.459139   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:08.459172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:08.534893   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:08.534918   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:08.534934   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:08.627283   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:08.627322   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:08.672928   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:08.672967   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:06.821775   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:09.322004   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.299572   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:10.799620   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.693106   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.192341   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
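The interleaved pod_ready.go lines come from three other test processes (65170, 65211, 65699), each polling its own profile every few seconds while waiting for a metrics-server pod to report Ready. A hedged sketch of how such a pod could be inspected directly, using the kubeconfig/context of the corresponding profile (the kube-system namespace and pod name are taken from the log; addressing the workload as deploy/metrics-server is an assumption about the addon's deployment name):

    kubectl -n kube-system get pod metrics-server-57f55c9bc5-rdthh -o wide
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-rdthh
    kubectl -n kube-system logs deploy/metrics-server --tail=100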
	I0318 22:02:11.230296   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:11.248814   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:11.248891   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:11.297030   65622 cri.go:89] found id: ""
	I0318 22:02:11.297056   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.297065   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:11.297072   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:11.297133   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:11.348811   65622 cri.go:89] found id: ""
	I0318 22:02:11.348837   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.348847   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:11.348854   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:11.348939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:11.412137   65622 cri.go:89] found id: ""
	I0318 22:02:11.412161   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.412168   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:11.412174   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:11.412231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:11.452098   65622 cri.go:89] found id: ""
	I0318 22:02:11.452128   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.452139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:11.452147   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:11.452207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:11.492477   65622 cri.go:89] found id: ""
	I0318 22:02:11.492509   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.492519   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:11.492527   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:11.492588   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:11.532208   65622 cri.go:89] found id: ""
	I0318 22:02:11.532234   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.532244   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:11.532252   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:11.532306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:11.570515   65622 cri.go:89] found id: ""
	I0318 22:02:11.570545   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.570556   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:11.570563   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:11.570633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:11.613031   65622 cri.go:89] found id: ""
	I0318 22:02:11.613052   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.613069   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:11.613079   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:11.613098   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:11.672019   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:11.672048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:11.687528   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:11.687550   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:11.761149   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:11.761172   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:11.761187   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.847273   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:11.847311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:14.393016   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:14.409657   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:14.409732   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:14.451669   65622 cri.go:89] found id: ""
	I0318 22:02:14.451697   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.451711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:14.451717   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:14.451763   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:14.503383   65622 cri.go:89] found id: ""
	I0318 22:02:14.503408   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.503419   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:14.503427   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:14.503491   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:14.543027   65622 cri.go:89] found id: ""
	I0318 22:02:14.543048   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.543056   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:14.543061   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:14.543104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:14.583615   65622 cri.go:89] found id: ""
	I0318 22:02:14.583639   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.583649   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:14.583656   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:14.583713   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:14.621176   65622 cri.go:89] found id: ""
	I0318 22:02:14.621206   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.621217   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:14.621225   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:14.621283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:14.659419   65622 cri.go:89] found id: ""
	I0318 22:02:14.659440   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.659448   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:14.659454   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:14.659499   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:14.699307   65622 cri.go:89] found id: ""
	I0318 22:02:14.699337   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.699347   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:14.699354   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:14.699416   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:14.737379   65622 cri.go:89] found id: ""
	I0318 22:02:14.737406   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.737414   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:14.737421   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:14.737432   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:14.793912   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:14.793939   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:14.809577   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:14.809604   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:14.898740   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:14.898767   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:14.898782   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.821139   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.821610   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.299590   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.303956   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.692089   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.693750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:14.981009   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:14.981038   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.526944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:17.543437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:17.543488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:17.585722   65622 cri.go:89] found id: ""
	I0318 22:02:17.585747   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.585757   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:17.585765   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:17.585820   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:17.623603   65622 cri.go:89] found id: ""
	I0318 22:02:17.623632   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.623642   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:17.623650   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:17.623712   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:17.666086   65622 cri.go:89] found id: ""
	I0318 22:02:17.666113   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.666122   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:17.666130   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:17.666188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:17.714403   65622 cri.go:89] found id: ""
	I0318 22:02:17.714430   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.714440   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:17.714448   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:17.714527   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:17.753174   65622 cri.go:89] found id: ""
	I0318 22:02:17.753199   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.753206   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:17.753212   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:17.753270   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:17.794962   65622 cri.go:89] found id: ""
	I0318 22:02:17.794992   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.795002   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:17.795010   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:17.795068   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:17.835446   65622 cri.go:89] found id: ""
	I0318 22:02:17.835469   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.835477   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:17.835482   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:17.835529   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:17.872243   65622 cri.go:89] found id: ""
	I0318 22:02:17.872271   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.872279   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:17.872287   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:17.872299   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.915485   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:17.915520   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:17.969133   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:17.969161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:17.984278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:17.984300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:18.055851   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:18.055871   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:18.055884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:16.320827   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:18.321654   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.800563   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.300888   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.694101   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.191376   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.646312   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:20.660153   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:20.660220   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:20.704341   65622 cri.go:89] found id: ""
	I0318 22:02:20.704365   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.704376   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:20.704388   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:20.704443   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:20.747673   65622 cri.go:89] found id: ""
	I0318 22:02:20.747694   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.747702   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:20.747708   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:20.747753   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:20.787547   65622 cri.go:89] found id: ""
	I0318 22:02:20.787574   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.787585   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:20.787593   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:20.787694   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:20.830416   65622 cri.go:89] found id: ""
	I0318 22:02:20.830450   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.830461   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:20.830469   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:20.830531   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:20.871867   65622 cri.go:89] found id: ""
	I0318 22:02:20.871899   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.871912   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:20.871919   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:20.871980   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:20.915574   65622 cri.go:89] found id: ""
	I0318 22:02:20.915602   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.915614   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:20.915622   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:20.915680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:20.956277   65622 cri.go:89] found id: ""
	I0318 22:02:20.956313   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.956322   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:20.956329   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:20.956399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:20.997686   65622 cri.go:89] found id: ""
	I0318 22:02:20.997715   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.997723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:20.997732   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:20.997745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:21.015019   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:21.015048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:21.092090   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:21.092117   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:21.092133   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:21.169118   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:21.169149   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:21.215267   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:21.215298   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:23.769587   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:23.784063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:23.784119   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:23.825704   65622 cri.go:89] found id: ""
	I0318 22:02:23.825726   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.825733   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:23.825740   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:23.825795   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:23.871536   65622 cri.go:89] found id: ""
	I0318 22:02:23.871561   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.871579   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:23.871586   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:23.871647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:23.911388   65622 cri.go:89] found id: ""
	I0318 22:02:23.911415   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.911422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:23.911428   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:23.911478   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:23.956649   65622 cri.go:89] found id: ""
	I0318 22:02:23.956671   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.956679   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:23.956687   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:23.956755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:23.999368   65622 cri.go:89] found id: ""
	I0318 22:02:23.999395   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.999405   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:23.999413   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:23.999471   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:24.039075   65622 cri.go:89] found id: ""
	I0318 22:02:24.039105   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.039118   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:24.039124   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:24.039186   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:24.079473   65622 cri.go:89] found id: ""
	I0318 22:02:24.079502   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.079513   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:24.079521   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:24.079587   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:24.118019   65622 cri.go:89] found id: ""
	I0318 22:02:24.118048   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.118059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:24.118069   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:24.118085   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:24.174530   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:24.174562   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:24.191685   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:24.191724   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:24.282133   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:24.282158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:24.282172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:24.366181   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:24.366228   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:20.322586   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.820488   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.820555   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.798797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.799501   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.192760   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.193279   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
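	The block above is one full iteration of the driver's log-gathering loop: it greps for a kube-apiserver process, lists CRI containers for each control-plane component with crictl (every lookup returns no IDs), then collects kubelet, dmesg, CRI-O and container-status output. Below is a minimal shell sketch for reproducing the same container checks by hand on the node; it only reuses commands that already appear in the log, and shell access to the guest (e.g. via minikube ssh) is an assumption.

	    # assumes a shell on the affected node; commands mirror the log lines above
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"      # empty output corresponds to the `found id: ""` lines
	    done
	    sudo journalctl -u kubelet -n 400 --no-pager  # kubelet logs, as gathered by logs.go
	    sudo journalctl -u crio -n 400 --no-pager     # CRI-O logs, as gathered by logs.go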
	I0318 22:02:26.912982   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:26.927364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:26.927425   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:26.968236   65622 cri.go:89] found id: ""
	I0318 22:02:26.968259   65622 logs.go:276] 0 containers: []
	W0318 22:02:26.968267   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:26.968272   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:26.968339   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:27.008226   65622 cri.go:89] found id: ""
	I0318 22:02:27.008251   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.008261   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:27.008267   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:27.008321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:27.047742   65622 cri.go:89] found id: ""
	I0318 22:02:27.047767   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.047777   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:27.047784   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:27.047844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:27.090692   65622 cri.go:89] found id: ""
	I0318 22:02:27.090722   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.090734   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:27.090741   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:27.090797   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:27.126596   65622 cri.go:89] found id: ""
	I0318 22:02:27.126621   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.126629   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:27.126635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:27.126684   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:27.162492   65622 cri.go:89] found id: ""
	I0318 22:02:27.162521   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.162530   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:27.162535   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:27.162583   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:27.203480   65622 cri.go:89] found id: ""
	I0318 22:02:27.203504   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.203517   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:27.203524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:27.203598   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:27.247140   65622 cri.go:89] found id: ""
	I0318 22:02:27.247162   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.247172   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:27.247182   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:27.247198   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:27.328507   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:27.328529   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:27.328543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:27.409269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:27.409303   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:27.459615   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:27.459647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:27.512980   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:27.513014   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:26.821222   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.321682   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:27.302631   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.799175   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.693239   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.192207   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.193072   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:30.030021   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:30.045235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:30.045288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:30.092857   65622 cri.go:89] found id: ""
	I0318 22:02:30.092896   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.092919   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:30.092927   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:30.092977   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:30.133145   65622 cri.go:89] found id: ""
	I0318 22:02:30.133169   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.133176   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:30.133181   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:30.133244   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:30.179214   65622 cri.go:89] found id: ""
	I0318 22:02:30.179242   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.179252   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:30.179259   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:30.179323   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:30.221500   65622 cri.go:89] found id: ""
	I0318 22:02:30.221524   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.221533   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:30.221541   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:30.221585   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:30.262483   65622 cri.go:89] found id: ""
	I0318 22:02:30.262505   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.262516   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:30.262524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:30.262584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:30.308456   65622 cri.go:89] found id: ""
	I0318 22:02:30.308482   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.308493   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:30.308500   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:30.308544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:30.346818   65622 cri.go:89] found id: ""
	I0318 22:02:30.346845   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.346853   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:30.346859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:30.346914   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:30.387265   65622 cri.go:89] found id: ""
	I0318 22:02:30.387298   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.387307   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:30.387317   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:30.387336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:30.446382   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:30.446409   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:30.462305   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:30.462329   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:30.538560   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:30.538583   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:30.538598   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:30.622537   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:30.622571   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:33.172154   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:33.186477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:33.186540   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:33.223436   65622 cri.go:89] found id: ""
	I0318 22:02:33.223464   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.223474   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:33.223481   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:33.223537   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:33.264785   65622 cri.go:89] found id: ""
	I0318 22:02:33.264810   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.264821   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:33.264829   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:33.264881   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:33.308014   65622 cri.go:89] found id: ""
	I0318 22:02:33.308035   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.308045   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:33.308055   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:33.308109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:33.348188   65622 cri.go:89] found id: ""
	I0318 22:02:33.348215   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.348224   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:33.348231   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:33.348292   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:33.387905   65622 cri.go:89] found id: ""
	I0318 22:02:33.387935   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.387946   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:33.387954   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:33.388015   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:33.430915   65622 cri.go:89] found id: ""
	I0318 22:02:33.430944   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.430956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:33.430964   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:33.431019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:33.473103   65622 cri.go:89] found id: ""
	I0318 22:02:33.473128   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.473135   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:33.473140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:33.473197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:33.512960   65622 cri.go:89] found id: ""
	I0318 22:02:33.512992   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.513003   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:33.513015   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:33.513029   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:33.569517   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:33.569554   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:33.585235   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:33.585263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:33.659494   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:33.659519   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:33.659538   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:33.749134   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:33.749181   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:31.820868   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.822075   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.802719   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:34.301730   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.692959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.194871   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
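	The interleaved pod_ready lines come from three other profiles running in parallel (PIDs 65170, 65211, 65699), each polling a metrics-server pod whose Ready condition never turns True. A hedged kubectl equivalent of that readiness check is sketched below; the context placeholder and the k8s-app=metrics-server label selector are assumptions for illustration, not taken from the log.

	    # hypothetical manual check of the same Ready condition the driver is polling on
	    kubectl --context <profile> -n kube-system get pod -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'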
	I0318 22:02:36.306589   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:36.321602   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:36.321654   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:36.364047   65622 cri.go:89] found id: ""
	I0318 22:02:36.364068   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.364076   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:36.364083   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:36.364139   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:36.406084   65622 cri.go:89] found id: ""
	I0318 22:02:36.406111   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.406119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:36.406125   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:36.406176   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:36.450861   65622 cri.go:89] found id: ""
	I0318 22:02:36.450887   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.450895   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:36.450900   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:36.450946   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:36.493979   65622 cri.go:89] found id: ""
	I0318 22:02:36.494006   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.494014   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:36.494020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:36.494079   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:36.539123   65622 cri.go:89] found id: ""
	I0318 22:02:36.539150   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.539160   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:36.539167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:36.539233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:36.577460   65622 cri.go:89] found id: ""
	I0318 22:02:36.577485   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.577495   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:36.577502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:36.577546   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:36.615276   65622 cri.go:89] found id: ""
	I0318 22:02:36.615300   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.615308   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:36.615313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:36.615369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:36.652756   65622 cri.go:89] found id: ""
	I0318 22:02:36.652775   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.652782   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:36.652790   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:36.652802   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:36.706253   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:36.706282   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:36.722032   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:36.722055   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:36.797758   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:36.797783   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:36.797799   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:36.875589   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:36.875622   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:39.422267   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:39.436967   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:39.437040   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:39.479916   65622 cri.go:89] found id: ""
	I0318 22:02:39.479941   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.479950   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:39.479956   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:39.480012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:39.542890   65622 cri.go:89] found id: ""
	I0318 22:02:39.542920   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.542930   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:39.542937   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:39.542990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:39.588200   65622 cri.go:89] found id: ""
	I0318 22:02:39.588225   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.588233   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:39.588239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:39.588290   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:39.629014   65622 cri.go:89] found id: ""
	I0318 22:02:39.629036   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.629043   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:39.629049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:39.629105   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:39.675522   65622 cri.go:89] found id: ""
	I0318 22:02:39.675551   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.675561   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:39.675569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:39.675629   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:39.722842   65622 cri.go:89] found id: ""
	I0318 22:02:39.722873   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.722883   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:39.722890   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:39.722951   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:39.760410   65622 cri.go:89] found id: ""
	I0318 22:02:39.760440   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.760451   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:39.760458   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:39.760519   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:39.799982   65622 cri.go:89] found id: ""
	I0318 22:02:39.800007   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.800016   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:39.800027   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:39.800045   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:39.878784   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:39.878805   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:39.878821   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:39.965987   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:39.966021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:36.320427   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.321178   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.799943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:39.300691   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.699873   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.193658   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:40.015006   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:40.015040   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:40.068619   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:40.068648   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:42.586444   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:42.603310   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:42.603394   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:42.645260   65622 cri.go:89] found id: ""
	I0318 22:02:42.645288   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.645296   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:42.645301   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:42.645360   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:42.682004   65622 cri.go:89] found id: ""
	I0318 22:02:42.682029   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.682036   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:42.682042   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:42.682086   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:42.722886   65622 cri.go:89] found id: ""
	I0318 22:02:42.722922   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.722939   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:42.722947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:42.723008   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:42.759183   65622 cri.go:89] found id: ""
	I0318 22:02:42.759208   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.759218   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:42.759224   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:42.759283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:42.799292   65622 cri.go:89] found id: ""
	I0318 22:02:42.799316   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.799325   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:42.799337   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:42.799389   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:42.838821   65622 cri.go:89] found id: ""
	I0318 22:02:42.838848   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.838856   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:42.838861   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:42.838908   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:42.877889   65622 cri.go:89] found id: ""
	I0318 22:02:42.877917   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.877927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:42.877935   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:42.877991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:42.921283   65622 cri.go:89] found id: ""
	I0318 22:02:42.921310   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.921323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:42.921334   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:42.921348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:43.000405   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:43.000444   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:43.042091   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:43.042116   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:43.094030   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:43.094059   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:43.108612   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:43.108647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:43.194388   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
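	Every "describe nodes" attempt in this run fails the same way: the bundled v1.20.0 kubectl cannot reach the apiserver at localhost:8443, which is consistent with the empty kube-apiserver container listings above. A quick triage sketch from a shell on the node follows; the availability of ss and curl in the guest is an assumption, while the kubectl invocation is the exact command from the log.

	    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"           # no apiserver socket
	    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"  # same refusal kubectl reports
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig                              # command taken verbatim from the log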
	I0318 22:02:40.321388   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:42.822538   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.799159   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.800027   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.299156   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.693317   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.194419   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:45.694881   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:45.709833   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:45.709897   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:45.749770   65622 cri.go:89] found id: ""
	I0318 22:02:45.749797   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.749806   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:45.749812   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:45.749866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:45.794879   65622 cri.go:89] found id: ""
	I0318 22:02:45.794909   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.794920   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:45.794928   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:45.794988   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:45.841587   65622 cri.go:89] found id: ""
	I0318 22:02:45.841608   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.841618   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:45.841625   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:45.841725   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:45.884972   65622 cri.go:89] found id: ""
	I0318 22:02:45.885004   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.885015   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:45.885023   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:45.885084   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:45.936170   65622 cri.go:89] found id: ""
	I0318 22:02:45.936204   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.936215   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:45.936223   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:45.936286   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:45.993684   65622 cri.go:89] found id: ""
	I0318 22:02:45.993708   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.993715   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:45.993720   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:45.993766   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:46.048422   65622 cri.go:89] found id: ""
	I0318 22:02:46.048445   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.048453   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:46.048459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:46.048512   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:46.087173   65622 cri.go:89] found id: ""
	I0318 22:02:46.087197   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.087206   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:46.087214   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:46.087227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:46.168633   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:46.168661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:46.168675   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:46.250797   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:46.250827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:46.302862   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:46.302883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:46.358096   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:46.358125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:48.874275   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:48.890166   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:48.890231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:48.930832   65622 cri.go:89] found id: ""
	I0318 22:02:48.930861   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.930869   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:48.930875   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:48.930919   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:48.972784   65622 cri.go:89] found id: ""
	I0318 22:02:48.972809   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.972819   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:48.972826   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:48.972884   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:49.011201   65622 cri.go:89] found id: ""
	I0318 22:02:49.011222   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.011229   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:49.011235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:49.011277   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:49.050457   65622 cri.go:89] found id: ""
	I0318 22:02:49.050480   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.050496   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:49.050502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:49.050565   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:49.087585   65622 cri.go:89] found id: ""
	I0318 22:02:49.087611   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.087621   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:49.087629   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:49.087687   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:49.126761   65622 cri.go:89] found id: ""
	I0318 22:02:49.126794   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.126805   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:49.126813   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:49.126874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:49.166045   65622 cri.go:89] found id: ""
	I0318 22:02:49.166074   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.166085   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:49.166092   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:49.166147   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:49.205624   65622 cri.go:89] found id: ""
	I0318 22:02:49.205650   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.205660   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:49.205670   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:49.205684   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:49.257864   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:49.257891   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:49.272581   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:49.272606   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:49.349960   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:49.349981   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:49.349996   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:49.438873   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:49.438916   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:45.322637   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:47.820481   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.300259   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.798429   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.693209   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.693611   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:51.984840   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:52.002378   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:52.002436   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:52.040871   65622 cri.go:89] found id: ""
	I0318 22:02:52.040890   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.040898   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:52.040917   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:52.040973   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:52.076062   65622 cri.go:89] found id: ""
	I0318 22:02:52.076083   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.076090   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:52.076096   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:52.076167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:52.119597   65622 cri.go:89] found id: ""
	I0318 22:02:52.119621   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.119629   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:52.119635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:52.119690   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:52.157892   65622 cri.go:89] found id: ""
	I0318 22:02:52.157919   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.157929   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:52.157936   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:52.157995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:52.196738   65622 cri.go:89] found id: ""
	I0318 22:02:52.196760   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.196767   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:52.196772   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:52.196836   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:52.234012   65622 cri.go:89] found id: ""
	I0318 22:02:52.234036   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.234043   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:52.234049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:52.234104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:52.273720   65622 cri.go:89] found id: ""
	I0318 22:02:52.273750   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.273761   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:52.273769   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:52.273817   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:52.317495   65622 cri.go:89] found id: ""
	I0318 22:02:52.317525   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.317535   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:52.317545   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:52.317619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:52.371640   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:52.371666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:52.387141   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:52.387165   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:52.469009   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:52.469035   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:52.469047   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:52.550848   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:52.550880   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:50.322017   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.820364   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:54.820692   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.799942   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.301665   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.694058   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.194171   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.096980   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:55.111353   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:55.111406   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:55.155832   65622 cri.go:89] found id: ""
	I0318 22:02:55.155857   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.155875   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:55.155882   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:55.155942   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:55.195477   65622 cri.go:89] found id: ""
	I0318 22:02:55.195499   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.195509   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:55.195516   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:55.195567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:55.234536   65622 cri.go:89] found id: ""
	I0318 22:02:55.234564   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.234574   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:55.234582   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:55.234640   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:55.270955   65622 cri.go:89] found id: ""
	I0318 22:02:55.270977   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.270984   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:55.270989   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:55.271033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:55.308883   65622 cri.go:89] found id: ""
	I0318 22:02:55.308919   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.308930   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:55.308937   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:55.308985   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:55.355259   65622 cri.go:89] found id: ""
	I0318 22:02:55.355284   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.355294   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:55.355301   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:55.355364   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:55.392385   65622 cri.go:89] found id: ""
	I0318 22:02:55.392409   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.392417   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:55.392423   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:55.392466   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:55.433773   65622 cri.go:89] found id: ""
	I0318 22:02:55.433794   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.433802   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:55.433810   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:55.433827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:55.518513   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:55.518536   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:55.518553   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:55.602717   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:55.602751   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:55.652409   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:55.652436   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:55.707150   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:55.707175   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.223146   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:58.240213   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:58.240288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:58.280676   65622 cri.go:89] found id: ""
	I0318 22:02:58.280702   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.280711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:58.280719   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:58.280778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:58.324490   65622 cri.go:89] found id: ""
	I0318 22:02:58.324515   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.324524   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:58.324531   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:58.324592   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:58.370256   65622 cri.go:89] found id: ""
	I0318 22:02:58.370288   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.370298   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:58.370309   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:58.370369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:58.419969   65622 cri.go:89] found id: ""
	I0318 22:02:58.420002   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.420012   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:58.420020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:58.420082   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:58.464916   65622 cri.go:89] found id: ""
	I0318 22:02:58.464942   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.464950   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:58.464956   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:58.465016   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:58.511388   65622 cri.go:89] found id: ""
	I0318 22:02:58.511415   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.511425   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:58.511433   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:58.511500   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:58.555314   65622 cri.go:89] found id: ""
	I0318 22:02:58.555344   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.555356   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:58.555364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:58.555426   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:58.595200   65622 cri.go:89] found id: ""
	I0318 22:02:58.595229   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.595239   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:58.595249   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:58.595263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:58.642037   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:58.642069   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:58.700216   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:58.700247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.715851   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:58.715882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:58.792139   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:58.792158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:58.792171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:56.821255   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:58.828524   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.303516   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.791851   65211 pod_ready.go:81] duration metric: took 4m0.000068811s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	E0318 22:02:57.791889   65211 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:02:57.791913   65211 pod_ready.go:38] duration metric: took 4m13.55705031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:02:57.791938   65211 kubeadm.go:591] duration metric: took 4m20.862001116s to restartPrimaryControlPlane
	W0318 22:02:57.792000   65211 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:02:57.792027   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:02:57.692975   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:59.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.395212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:01.411364   65622 kubeadm.go:591] duration metric: took 4m3.302597324s to restartPrimaryControlPlane
	W0318 22:03:01.411442   65622 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:01.411474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:02.800222   65622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.388721926s)
	I0318 22:03:02.800302   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:02.817517   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:02.832036   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:02.844307   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:02.844324   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:02.844381   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:02.857804   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:02.857882   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:02.871307   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:02.883191   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:02.883252   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:02.896457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.908089   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:02.908147   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.920327   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:02.932098   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:02.932158   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:02.944129   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:03.034197   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:03:03.034333   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:03.204271   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:03.204501   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:03.204645   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:03.415789   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:03.417688   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:03.417801   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:03.417902   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:03.418026   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:03.418129   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:03.418242   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:03.418324   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:03.418420   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:03.418502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:03.418614   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:03.418744   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:03.418823   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:03.418916   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:03.644844   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:03.912013   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:04.097560   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:04.222469   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:04.239066   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:04.250168   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:04.250225   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:04.399277   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:04.401154   65622 out.go:204]   - Booting up control plane ...
	I0318 22:03:04.401283   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:04.406500   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:04.407544   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:04.410177   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:04.418949   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:01.321045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:03.322008   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.694585   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:04.195750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:05.322087   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:07.820940   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:09.822652   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:06.693803   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:08.693856   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:10.694375   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:12.321504   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:14.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:13.192173   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:15.193816   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:16.822327   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.322059   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:17.691761   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.691867   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.322674   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.823374   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.692710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.695045   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.192838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.322370   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:28.820807   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.165008   65211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.372946393s)
	I0318 22:03:30.165087   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:30.184259   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:30.198417   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:30.210595   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:30.210624   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:30.210675   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:30.222159   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:30.222210   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:30.234099   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:30.244546   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:30.244621   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:30.255192   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.265777   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:30.265833   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.276674   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:30.286349   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:30.286402   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:30.296530   65211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:30.522414   65211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:03:28.193120   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.194300   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:31.321986   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:33.823045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:32.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:34.693824   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.294937   65211 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:03:39.295015   65211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:39.295142   65211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:39.295296   65211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:39.295451   65211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:39.295550   65211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:39.297047   65211 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:39.297135   65211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:39.297250   65211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:39.297368   65211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:39.297461   65211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:39.297557   65211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:39.297640   65211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:39.297742   65211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:39.297831   65211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:39.297939   65211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:39.298032   65211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:39.298084   65211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:39.298206   65211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:39.298301   65211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:39.298376   65211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:39.298451   65211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:39.298518   65211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:39.298612   65211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:39.298693   65211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:39.299829   65211 out.go:204]   - Booting up control plane ...
	I0318 22:03:39.299959   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:39.300052   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:39.300150   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:39.300308   65211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:39.300444   65211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:39.300496   65211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:39.300713   65211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:39.300829   65211 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003359 seconds
	I0318 22:03:39.300997   65211 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:03:39.301155   65211 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:03:39.301228   65211 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:03:39.301451   65211 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-141758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:03:39.301526   65211 kubeadm.go:309] [bootstrap-token] Using token: p114v6.erax4pf5xkn6x2it
	I0318 22:03:39.302903   65211 out.go:204]   - Configuring RBAC rules ...
	I0318 22:03:39.303025   65211 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:03:39.303133   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:03:39.303301   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:03:39.303479   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:03:39.303574   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:03:39.303651   65211 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:03:39.303810   65211 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:03:39.303886   65211 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:03:39.303960   65211 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:03:39.303972   65211 kubeadm.go:309] 
	I0318 22:03:39.304041   65211 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:03:39.304050   65211 kubeadm.go:309] 
	I0318 22:03:39.304158   65211 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:03:39.304173   65211 kubeadm.go:309] 
	I0318 22:03:39.304208   65211 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:03:39.304292   65211 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:03:39.304368   65211 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:03:39.304377   65211 kubeadm.go:309] 
	I0318 22:03:39.304456   65211 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:03:39.304465   65211 kubeadm.go:309] 
	I0318 22:03:39.304547   65211 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:03:39.304570   65211 kubeadm.go:309] 
	I0318 22:03:39.304649   65211 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:03:39.304754   65211 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:03:39.304861   65211 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:03:39.304878   65211 kubeadm.go:309] 
	I0318 22:03:39.305028   65211 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:03:39.305134   65211 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:03:39.305144   65211 kubeadm.go:309] 
	I0318 22:03:39.305248   65211 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305390   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:03:39.305422   65211 kubeadm.go:309] 	--control-plane 
	I0318 22:03:39.305430   65211 kubeadm.go:309] 
	I0318 22:03:39.305545   65211 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:03:39.305556   65211 kubeadm.go:309] 
	I0318 22:03:39.305676   65211 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305843   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:03:39.305859   65211 cni.go:84] Creating CNI manager for ""
	I0318 22:03:39.305873   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:03:39.307416   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:03:36.323956   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:38.821180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.308819   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:03:39.375416   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 22:03:39.434235   65211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:03:39.434303   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.434360   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-141758 minikube.k8s.io/updated_at=2024_03_18T22_03_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=embed-certs-141758 minikube.k8s.io/primary=true
	I0318 22:03:39.677778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.708540   65211 ops.go:34] apiserver oom_adj: -16
	I0318 22:03:40.178803   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:40.678832   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:37.193451   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.193667   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:44.419883   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:03:44.420568   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:44.420749   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:40.821359   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.323788   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:41.678334   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.177921   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.678115   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.178034   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.678655   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.177993   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.678581   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.177929   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.678124   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:46.178423   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.693587   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.693965   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.195060   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:49.421054   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:49.421381   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:45.821472   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:47.822362   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.678288   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.178394   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.678824   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.678144   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.178090   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.678295   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.178829   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.677856   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:51.177778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.197085   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:50.693056   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:51.192418   65699 pod_ready.go:81] duration metric: took 4m0.006727095s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:51.192452   65699 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0318 22:03:51.192462   65699 pod_ready.go:38] duration metric: took 4m5.551753918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:51.192480   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:51.192514   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:51.192574   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:51.248553   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:51.248575   65699 cri.go:89] found id: ""
	I0318 22:03:51.248583   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:51.248634   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.254205   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:51.254270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:51.303508   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.303534   65699 cri.go:89] found id: ""
	I0318 22:03:51.303543   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:51.303600   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.310160   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:51.310212   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:51.357409   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:51.357429   65699 cri.go:89] found id: ""
	I0318 22:03:51.357436   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:51.357480   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.362683   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:51.362744   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:51.413520   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.413550   65699 cri.go:89] found id: ""
	I0318 22:03:51.413560   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:51.413619   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.419412   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:51.419483   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:51.468338   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:51.468365   65699 cri.go:89] found id: ""
	I0318 22:03:51.468374   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:51.468432   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.474006   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:51.474070   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:51.520166   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:51.520188   65699 cri.go:89] found id: ""
	I0318 22:03:51.520195   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:51.520246   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.526087   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:51.526148   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:51.570735   65699 cri.go:89] found id: ""
	I0318 22:03:51.570761   65699 logs.go:276] 0 containers: []
	W0318 22:03:51.570772   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:51.570779   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:51.570832   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:51.678380   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.178543   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.677807   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.814739   65211 kubeadm.go:1107] duration metric: took 13.380493852s to wait for elevateKubeSystemPrivileges
	W0318 22:03:52.814773   65211 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:03:52.814782   65211 kubeadm.go:393] duration metric: took 5m15.94869953s to StartCluster
	I0318 22:03:52.814803   65211 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.814883   65211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:03:52.816928   65211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.817192   65211 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:03:52.818800   65211 out.go:177] * Verifying Kubernetes components...
	I0318 22:03:52.817486   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:03:52.817499   65211 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:03:52.820175   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:03:52.818838   65211 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-141758"
	I0318 22:03:52.820277   65211 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-141758"
	W0318 22:03:52.820288   65211 addons.go:243] addon storage-provisioner should already be in state true
	I0318 22:03:52.818844   65211 addons.go:69] Setting metrics-server=true in profile "embed-certs-141758"
	I0318 22:03:52.820369   65211 addons.go:234] Setting addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:52.818848   65211 addons.go:69] Setting default-storageclass=true in profile "embed-certs-141758"
	I0318 22:03:52.820429   65211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-141758"
	I0318 22:03:52.820317   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	W0318 22:03:52.820386   65211 addons.go:243] addon metrics-server should already be in state true
	I0318 22:03:52.820697   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.820821   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820846   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.820872   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820899   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.821079   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.821107   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.839829   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0318 22:03:52.839850   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I0318 22:03:52.839992   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840448   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840986   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841010   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841124   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841144   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841148   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841162   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841385   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841428   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841557   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841639   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.842001   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842043   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.842049   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842068   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.845295   65211 addons.go:234] Setting addon default-storageclass=true in "embed-certs-141758"
	W0318 22:03:52.845315   65211 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:03:52.845343   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.845692   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.845736   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.864111   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0318 22:03:52.864141   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0318 22:03:52.864614   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.864688   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.865181   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865199   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865318   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865334   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865556   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866107   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.866147   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.866343   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866630   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.868253   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.870076   65211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:03:52.871315   65211 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:52.871333   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:03:52.871352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.873922   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0318 22:03:52.874420   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.874924   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.874944   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.875080   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.875194   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.875254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.875346   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.875478   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.875718   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.875733   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.876060   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.876234   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.877582   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.879040   65211 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:03:50.320724   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.321791   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:54.821845   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.880124   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:03:52.880135   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:03:52.880152   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.882530   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.882957   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.882979   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.883230   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.883371   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.883507   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.883638   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.886181   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0318 22:03:52.886563   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.887043   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.887064   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.887416   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.887599   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.888998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.889490   65211 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:52.889504   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:03:52.889519   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.891985   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892380   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.892435   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.892776   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.892949   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.893066   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:53.047557   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:03:53.098470   65211 node_ready.go:35] waiting up to 6m0s for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111074   65211 node_ready.go:49] node "embed-certs-141758" has status "Ready":"True"
	I0318 22:03:53.111093   65211 node_ready.go:38] duration metric: took 12.593803ms for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111102   65211 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:53.127297   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:53.167460   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:03:53.167476   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:03:53.199789   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:53.221070   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:53.233431   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:03:53.233452   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:03:53.298339   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:53.298368   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:03:53.415046   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:55.057164   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.85734001s)
	I0318 22:03:55.057233   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057252   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.057590   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057601   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.057614   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057634   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057888   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057929   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.064097   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.064111   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.064376   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.064402   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.064418   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.138948   65211 pod_ready.go:92] pod "coredns-5dd5756b68-k675p" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.138968   65211 pod_ready.go:81] duration metric: took 2.011647544s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.138976   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150187   65211 pod_ready.go:92] pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.150204   65211 pod_ready.go:81] duration metric: took 11.222328ms for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150213   65211 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157054   65211 pod_ready.go:92] pod "etcd-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.157073   65211 pod_ready.go:81] duration metric: took 6.853876ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157086   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.167962   65211 pod_ready.go:92] pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.167986   65211 pod_ready.go:81] duration metric: took 10.892042ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.168000   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177187   65211 pod_ready.go:92] pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.177204   65211 pod_ready.go:81] duration metric: took 9.197593ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177213   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.515883   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.294780085s)
	I0318 22:03:55.515937   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.515948   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.515952   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.100869127s)
	I0318 22:03:55.515994   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516014   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516301   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516378   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516469   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516481   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516491   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516406   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516451   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516665   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516683   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516691   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516772   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516839   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516867   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519334   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.519340   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.519355   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519365   65211 addons.go:470] Verifying addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:55.520941   65211 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 22:03:55.522318   65211 addons.go:505] duration metric: took 2.704813533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 22:03:55.545590   65211 pod_ready.go:92] pod "kube-proxy-jltc7" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.545614   65211 pod_ready.go:81] duration metric: took 368.395697ms for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.545625   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932726   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.932750   65211 pod_ready.go:81] duration metric: took 387.117475ms for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932757   65211 pod_ready.go:38] duration metric: took 2.821645915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:55.932771   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:55.932815   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.969924   65211 api_server.go:72] duration metric: took 3.152691986s to wait for apiserver process to appear ...
	I0318 22:03:55.969955   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.969977   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 22:03:55.976004   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 22:03:55.977450   65211 api_server.go:141] control plane version: v1.28.4
	I0318 22:03:55.977489   65211 api_server.go:131] duration metric: took 7.525909ms to wait for apiserver health ...
	I0318 22:03:55.977499   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:56.138403   65211 system_pods.go:59] 9 kube-system pods found
	I0318 22:03:56.138429   65211 system_pods.go:61] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.138434   65211 system_pods.go:61] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.138438   65211 system_pods.go:61] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.138441   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.138444   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.138448   65211 system_pods.go:61] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.138453   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.138462   65211 system_pods.go:61] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.138519   65211 system_pods.go:61] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 22:03:56.138532   65211 system_pods.go:74] duration metric: took 161.01924ms to wait for pod list to return data ...
	I0318 22:03:56.138544   65211 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:03:56.331884   65211 default_sa.go:45] found service account: "default"
	I0318 22:03:56.331926   65211 default_sa.go:55] duration metric: took 193.36174ms for default service account to be created ...
	I0318 22:03:56.331937   65211 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:03:56.536411   65211 system_pods.go:86] 9 kube-system pods found
	I0318 22:03:56.536443   65211 system_pods.go:89] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.536452   65211 system_pods.go:89] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.536459   65211 system_pods.go:89] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.536466   65211 system_pods.go:89] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.536472   65211 system_pods.go:89] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.536479   65211 system_pods.go:89] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.536486   65211 system_pods.go:89] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.536497   65211 system_pods.go:89] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.536507   65211 system_pods.go:89] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Running
	I0318 22:03:56.536518   65211 system_pods.go:126] duration metric: took 204.57366ms to wait for k8s-apps to be running ...
	I0318 22:03:56.536531   65211 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:03:56.536579   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:56.557315   65211 system_svc.go:56] duration metric: took 20.775851ms WaitForService to wait for kubelet
	I0318 22:03:56.557344   65211 kubeadm.go:576] duration metric: took 3.740121987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:03:56.557375   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:03:51.614216   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:51.614235   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.614239   65699 cri.go:89] found id: ""
	I0318 22:03:51.614245   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:51.614297   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.619100   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.623808   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:51.623827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:51.780027   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:51.780067   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.842134   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:51.842167   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.889769   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:51.889797   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.942502   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:51.942543   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:52.467986   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:52.468043   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:52.518980   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:52.519023   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:52.536546   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:52.536586   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:52.591854   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:52.591894   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:52.640783   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:52.640818   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:52.687934   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:52.687967   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:52.749690   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:52.749726   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:52.807019   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:52.807064   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:55.392930   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.415406   65699 api_server.go:72] duration metric: took 4m15.533409678s to wait for apiserver process to appear ...
	I0318 22:03:55.415435   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.415472   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:55.415523   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:55.474200   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:55.474227   65699 cri.go:89] found id: ""
	I0318 22:03:55.474237   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:55.474295   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.479787   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:55.479907   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:55.532114   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:55.532136   65699 cri.go:89] found id: ""
	I0318 22:03:55.532145   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:55.532202   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.537215   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:55.537270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:55.588633   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:55.588657   65699 cri.go:89] found id: ""
	I0318 22:03:55.588666   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:55.588723   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.595711   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:55.595777   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:55.646684   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:55.646704   65699 cri.go:89] found id: ""
	I0318 22:03:55.646714   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:55.646770   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.651920   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:55.651982   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:55.694948   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:55.694975   65699 cri.go:89] found id: ""
	I0318 22:03:55.694984   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:55.695035   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.700275   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:55.700343   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:55.740536   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:55.740559   65699 cri.go:89] found id: ""
	I0318 22:03:55.740568   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:55.740618   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.745384   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:55.745446   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:55.784614   65699 cri.go:89] found id: ""
	I0318 22:03:55.784645   65699 logs.go:276] 0 containers: []
	W0318 22:03:55.784657   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:55.784664   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:55.784727   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:55.827306   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:55.827334   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:55.827341   65699 cri.go:89] found id: ""
	I0318 22:03:55.827349   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:55.827404   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.832314   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.838497   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:55.838520   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:55.857285   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:55.857319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:55.984597   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:55.984629   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:56.044283   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:56.044339   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:56.100329   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:56.100363   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:56.173231   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:56.173270   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:56.221280   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:56.221310   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:56.274110   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:56.274138   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:56.332863   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:56.332891   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:56.374289   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:56.374317   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:56.423793   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:56.423827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:56.478696   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:56.478734   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:56.518600   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:56.518627   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:56.731788   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:03:56.731810   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 22:03:56.731823   65211 node_conditions.go:105] duration metric: took 174.442649ms to run NodePressure ...
	I0318 22:03:56.731835   65211 start.go:240] waiting for startup goroutines ...
	I0318 22:03:56.731845   65211 start.go:245] waiting for cluster config update ...
	I0318 22:03:56.731857   65211 start.go:254] writing updated cluster config ...
	I0318 22:03:56.732109   65211 ssh_runner.go:195] Run: rm -f paused
	I0318 22:03:56.778660   65211 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:03:56.780431   65211 out.go:177] * Done! kubectl is now configured to use "embed-certs-141758" cluster and "default" namespace by default
	I0318 22:03:59.422001   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:59.422212   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:56.814631   65170 pod_ready.go:81] duration metric: took 4m0.000725499s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:56.814661   65170 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:03:56.814684   65170 pod_ready.go:38] duration metric: took 4m11.531709977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:56.814712   65170 kubeadm.go:591] duration metric: took 4m19.482098142s to restartPrimaryControlPlane
	W0318 22:03:56.814767   65170 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:56.814797   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:59.480665   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 22:03:59.485792   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 22:03:59.487343   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 22:03:59.487364   65699 api_server.go:131] duration metric: took 4.071921663s to wait for apiserver health ...
	I0318 22:03:59.487375   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:59.487406   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:59.487462   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:59.540845   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:59.540872   65699 cri.go:89] found id: ""
	I0318 22:03:59.540881   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:59.540958   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.547759   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:59.547824   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:59.593015   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:59.593042   65699 cri.go:89] found id: ""
	I0318 22:03:59.593051   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:59.593106   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.598169   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:59.598233   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:59.638484   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:59.638508   65699 cri.go:89] found id: ""
	I0318 22:03:59.638517   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:59.638575   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.643353   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:59.643416   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:59.687190   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:59.687208   65699 cri.go:89] found id: ""
	I0318 22:03:59.687216   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:59.687271   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.692481   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:59.692550   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:59.735798   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:59.735824   65699 cri.go:89] found id: ""
	I0318 22:03:59.735834   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:59.735893   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.742192   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:59.742263   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:59.782961   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:59.782989   65699 cri.go:89] found id: ""
	I0318 22:03:59.783000   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:59.783060   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.788247   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:59.788325   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:59.836955   65699 cri.go:89] found id: ""
	I0318 22:03:59.836983   65699 logs.go:276] 0 containers: []
	W0318 22:03:59.836992   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:59.836998   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:59.837052   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:59.879225   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:59.879250   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:59.879255   65699 cri.go:89] found id: ""
	I0318 22:03:59.879264   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:59.879323   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.884380   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.889289   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:59.889316   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:04:00.307344   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:04:00.307389   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:04:00.325472   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:04:00.325496   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:04:00.388254   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:04:00.388288   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:04:00.430203   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:04:00.430241   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:04:00.476834   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:04:00.476861   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:04:00.532672   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:04:00.532703   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:04:00.572174   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:04:00.572202   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:04:00.624250   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:04:00.624283   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:04:00.688520   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:04:00.688551   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:04:00.764279   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:04:00.764319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:04:00.903231   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:04:00.903262   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:04:00.974836   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:04:00.974869   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
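The per-component log gathering above is plain crictl plumbing driven over SSH; the same calls can be replayed by hand against the no-preload-963041 VM. A minimal sketch (assuming the default `minikube ssh` access for this profile; the container ID is the kube-proxy one printed above):

    # open a shell on the node
    minikube ssh -p no-preload-963041
    # list all containers (running and exited) known to CRI-O
    sudo crictl ps -a
    # dump the last 400 lines of a specific container's logs
    sudo crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5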
	I0318 22:04:03.547135   65699 system_pods.go:59] 8 kube-system pods found
	I0318 22:04:03.547166   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.547172   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.547180   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.547186   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.547193   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.547198   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.547208   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.547214   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.547224   65699 system_pods.go:74] duration metric: took 4.059842092s to wait for pod list to return data ...
	I0318 22:04:03.547233   65699 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:03.554656   65699 default_sa.go:45] found service account: "default"
	I0318 22:04:03.554682   65699 default_sa.go:55] duration metric: took 7.437557ms for default service account to be created ...
	I0318 22:04:03.554692   65699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:03.562342   65699 system_pods.go:86] 8 kube-system pods found
	I0318 22:04:03.562369   65699 system_pods.go:89] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.562374   65699 system_pods.go:89] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.562378   65699 system_pods.go:89] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.562383   65699 system_pods.go:89] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.562387   65699 system_pods.go:89] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.562391   65699 system_pods.go:89] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.562397   65699 system_pods.go:89] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.562402   65699 system_pods.go:89] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.562410   65699 system_pods.go:126] duration metric: took 7.712357ms to wait for k8s-apps to be running ...
	I0318 22:04:03.562424   65699 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:03.562470   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:03.579949   65699 system_svc.go:56] duration metric: took 17.517801ms WaitForService to wait for kubelet
	I0318 22:04:03.579977   65699 kubeadm.go:576] duration metric: took 4m23.697982351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:03.579993   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:03.585009   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:03.585037   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:03.585049   65699 node_conditions.go:105] duration metric: took 5.050614ms to run NodePressure ...
	I0318 22:04:03.585063   65699 start.go:240] waiting for startup goroutines ...
	I0318 22:04:03.585075   65699 start.go:245] waiting for cluster config update ...
	I0318 22:04:03.585089   65699 start.go:254] writing updated cluster config ...
	I0318 22:04:03.585426   65699 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:03.634969   65699 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 22:04:03.637561   65699 out.go:177] * Done! kubectl is now configured to use "no-preload-963041" cluster and "default" namespace by default
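At this point minikube has written the no-preload-963041 context into the user's kubeconfig and selected it as current. A quick sanity check of the freshly configured context (a sketch using standard kubectl; the context name matches the profile name):

    # confirm the context exists and is selected
    kubectl config get-contexts
    # confirm the control plane and system pods from that context
    kubectl --context no-preload-963041 get nodes
    kubectl --context no-preload-963041 get pods -n kube-system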
	I0318 22:04:19.422826   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:19.423111   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
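The port 10248 endpoint kubeadm is probing here is the kubelet's local healthz server, so a refused connection means the kubelet process is not up (or exited before binding). The same probe, plus the follow-ups kubeadm itself suggests further down in this log, can be run directly on the affected profile's VM (a sketch, assuming SSH access to that node):

    # kubelet's local health endpoint; prints "ok" when the kubelet is serving
    curl -sSL http://localhost:10248/healthz
    # is the unit running, and why did it stop?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100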
	I0318 22:04:29.143869   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.329052492s)
	I0318 22:04:29.143935   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:29.161708   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:04:29.173738   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:04:29.185221   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:04:29.185241   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 22:04:29.185273   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 22:04:29.196326   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:04:29.196382   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:04:29.207305   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 22:04:29.217759   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:04:29.217811   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:04:29.228350   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.239148   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:04:29.239191   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.251191   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 22:04:29.262291   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:04:29.262339   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
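The pattern in the block above is a stale-config sweep: for each kubeconfig under /etc/kubernetes, minikube greps for the expected endpoint (https://control-plane.minikube.internal:8444) and removes the file when the grep fails, so kubeadm regenerates it on the next init. The equivalent shell, condensed into a loop over the same four files and commands:

    # remove any kubeconfig that does not point at the expected API endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done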
	I0318 22:04:29.273343   65170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:04:29.332561   65170 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:04:29.333329   65170 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:04:29.496432   65170 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:04:29.496558   65170 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:04:29.496720   65170 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:04:29.728202   65170 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:04:29.730047   65170 out.go:204]   - Generating certificates and keys ...
	I0318 22:04:29.730126   65170 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:04:29.730202   65170 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:04:29.730297   65170 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:04:29.730669   65170 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:04:29.731209   65170 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:04:29.731887   65170 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:04:29.732569   65170 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:04:29.733362   65170 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:04:29.734045   65170 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:04:29.734477   65170 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:04:29.735264   65170 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:04:29.735340   65170 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:04:30.122363   65170 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:04:30.296021   65170 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:04:30.555774   65170 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:04:30.674403   65170 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:04:30.674943   65170 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:04:30.677509   65170 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:04:30.679219   65170 out.go:204]   - Booting up control plane ...
	I0318 22:04:30.679319   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:04:30.679402   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:04:30.681975   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:04:30.701015   65170 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:04:30.701902   65170 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:04:30.702104   65170 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:04:30.843019   65170 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:04:36.846312   65170 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002976 seconds
	I0318 22:04:36.846520   65170 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:04:36.870892   65170 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:04:37.410373   65170 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:04:37.410649   65170 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-660775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:04:37.935730   65170 kubeadm.go:309] [bootstrap-token] Using token: jwgiie.tp4r5ug6emevtbxj
	I0318 22:04:37.937024   65170 out.go:204]   - Configuring RBAC rules ...
	I0318 22:04:37.937156   65170 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:04:37.943204   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:04:37.951400   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:04:37.958005   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:04:37.962013   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:04:37.965783   65170 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:04:37.985150   65170 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:04:38.241561   65170 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:04:38.355495   65170 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:04:38.356452   65170 kubeadm.go:309] 
	I0318 22:04:38.356511   65170 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:04:38.356520   65170 kubeadm.go:309] 
	I0318 22:04:38.356598   65170 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:04:38.356609   65170 kubeadm.go:309] 
	I0318 22:04:38.356667   65170 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:04:38.356774   65170 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:04:38.356828   65170 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:04:38.356844   65170 kubeadm.go:309] 
	I0318 22:04:38.356898   65170 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:04:38.356916   65170 kubeadm.go:309] 
	I0318 22:04:38.356976   65170 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:04:38.356984   65170 kubeadm.go:309] 
	I0318 22:04:38.357030   65170 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:04:38.357093   65170 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:04:38.357161   65170 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:04:38.357168   65170 kubeadm.go:309] 
	I0318 22:04:38.357263   65170 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:04:38.357364   65170 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:04:38.357376   65170 kubeadm.go:309] 
	I0318 22:04:38.357495   65170 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.357657   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:04:38.357707   65170 kubeadm.go:309] 	--control-plane 
	I0318 22:04:38.357724   65170 kubeadm.go:309] 
	I0318 22:04:38.357861   65170 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:04:38.357873   65170 kubeadm.go:309] 
	I0318 22:04:38.357986   65170 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.358144   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:04:38.358726   65170 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
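The only warning kubeadm raised is that the kubelet systemd unit is not enabled, so it would not come back on its own after a VM reboot; minikube starts the unit itself during provisioning (see the `sudo systemctl start kubelet` call further below), but the fix kubeadm suggests is a one-liner run on the node:

    # make the kubelet unit start automatically at boot
    sudo systemctl enable kubelet.service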
	I0318 22:04:38.358772   65170 cni.go:84] Creating CNI manager for ""
	I0318 22:04:38.358789   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:04:38.360246   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:04:38.361264   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:04:38.378420   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
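The 457-byte conflist pushed here is minikube's bridge CNI configuration; its contents are not echoed into the log, but they can be read back off the node after provisioning (a sketch, assuming the default minikube SSH access):

    # inspect the bridge CNI config minikube just wrote
    minikube ssh -p default-k8s-diff-port-660775 -- sudo cat /etc/cni/net.d/1-k8s.conflist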
	I0318 22:04:38.482111   65170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:04:38.482178   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:38.482194   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-660775 minikube.k8s.io/updated_at=2024_03_18T22_04_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=default-k8s-diff-port-660775 minikube.k8s.io/primary=true
	I0318 22:04:38.617420   65170 ops.go:34] apiserver oom_adj: -16
	I0318 22:04:38.828087   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.328292   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.828411   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.328829   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.828338   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.329118   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.828239   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.328296   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.828241   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.329151   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.829036   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.328224   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.828465   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.328632   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.828289   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.328321   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.828493   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.329008   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.828789   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.328727   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.829024   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.329010   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.828311   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.328474   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.445593   65170 kubeadm.go:1107] duration metric: took 11.963480655s to wait for elevateKubeSystemPrivileges
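The burst of `kubectl get sa default` calls above is a poll loop: right after init, minikube binds cluster-admin to the kube-system:default service account (the clusterrolebinding at 22:04:38) and then retries roughly every 500ms until the default service account exists in the new cluster, which took just under 12s here. A hand-rolled equivalent of that wait, using the same bundled kubectl and kubeconfig (a sketch):

    # poll until the "default" service account has been created by the controller manager
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done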
	W0318 22:04:50.445640   65170 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:04:50.445651   65170 kubeadm.go:393] duration metric: took 5m13.168616417s to StartCluster
	I0318 22:04:50.445672   65170 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.445754   65170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:04:50.447789   65170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.448086   65170 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:04:50.449989   65170 out.go:177] * Verifying Kubernetes components...
	I0318 22:04:50.448238   65170 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:04:50.450030   65170 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450044   65170 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450068   65170 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-660775"
	I0318 22:04:50.450070   65170 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.450078   65170 addons.go:243] addon storage-provisioner should already be in state true
	W0318 22:04:50.450082   65170 addons.go:243] addon metrics-server should already be in state true
	I0318 22:04:50.450105   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450116   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450033   65170 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450181   65170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-660775"
	I0318 22:04:50.450493   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450516   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450585   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450628   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.448310   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:04:50.452465   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:04:50.466764   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0318 22:04:50.468214   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0318 22:04:50.468460   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.468676   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469019   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469038   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469182   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469195   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469254   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0318 22:04:50.469549   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.469605   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469603   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470035   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.470053   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.470320   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470350   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470381   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470385   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470395   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470535   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.473854   65170 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.473879   65170 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:04:50.473907   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.474268   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.474301   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.485707   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39175
	I0318 22:04:50.486097   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0318 22:04:50.486278   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486675   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486809   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.486818   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487074   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.487086   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487345   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487513   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487561   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.487759   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.489284   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.491084   65170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:04:50.489730   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.492156   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0318 22:04:50.492539   65170 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.492549   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:04:50.492563   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.494057   65170 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:04:50.492998   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.495232   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:04:50.495253   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:04:50.495275   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.495863   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.495887   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.495952   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.496340   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496476   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.496620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.496757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.496861   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.497350   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.498004   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.498047   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.498450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.499027   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499235   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.499406   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.499565   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.499691   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.515126   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0318 22:04:50.515913   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.516473   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.516498   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.516800   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.517008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.518559   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.518811   65170 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.518825   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:04:50.518842   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.522625   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523156   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.523537   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523810   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.523984   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.524193   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.524430   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.682066   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:04:50.699269   65170 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709309   65170 node_ready.go:49] node "default-k8s-diff-port-660775" has status "Ready":"True"
	I0318 22:04:50.709330   65170 node_ready.go:38] duration metric: took 10.026001ms for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709342   65170 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.713958   65170 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720434   65170 pod_ready.go:92] pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.720459   65170 pod_ready.go:81] duration metric: took 6.477329ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720471   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725799   65170 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.725820   65170 pod_ready.go:81] duration metric: took 5.341405ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725829   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.730987   65170 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.731006   65170 pod_ready.go:81] duration metric: took 5.171376ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.731016   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737458   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.737481   65170 pod_ready.go:81] duration metric: took 6.458242ms for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737490   65170 pod_ready.go:38] duration metric: took 28.137606ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
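The readiness gates above (node Ready, then each control-plane pod Ready) all resolved in a few milliseconds because the static control-plane pods were already up from the init that finished seconds earlier. The same checks can be reproduced with `kubectl wait` from the host against the new context (a sketch; kubeadm static pods carry a `component` label):

    # node readiness
    kubectl --context default-k8s-diff-port-660775 wait --for=condition=Ready \
      node/default-k8s-diff-port-660775 --timeout=6m
    # control-plane pods, matched by component label as minikube does
    kubectl --context default-k8s-diff-port-660775 -n kube-system wait --for=condition=Ready \
      pod -l component=kube-apiserver --timeout=6m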
	I0318 22:04:50.737506   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:04:50.737560   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:04:50.757770   65170 api_server.go:72] duration metric: took 309.622189ms to wait for apiserver process to appear ...
	I0318 22:04:50.757795   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:04:50.757815   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 22:04:50.765732   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 22:04:50.769202   65170 api_server.go:141] control plane version: v1.28.4
	I0318 22:04:50.769228   65170 api_server.go:131] duration metric: took 11.424563ms to wait for apiserver health ...
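The health gate here is a plain HTTPS GET against /healthz on this profile's API port (8444). It can be checked by hand; assuming anonymous auth and the default system:public-info-viewer binding are in place, no client certificate is needed (a sketch):

    # -k: the apiserver's serving cert is signed by the cluster CA, not a public one
    curl -k https://192.168.50.150:8444/healthz
    # expected output: ok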
	I0318 22:04:50.769238   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:04:50.831223   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.859994   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:04:50.860014   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:04:50.864994   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.905212   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:04:50.905257   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:04:50.918389   65170 system_pods.go:59] 4 kube-system pods found
	I0318 22:04:50.918416   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:50.918422   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:50.918426   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:50.918429   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:50.918435   65170 system_pods.go:74] duration metric: took 149.190745ms to wait for pod list to return data ...
	I0318 22:04:50.918442   65170 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:50.993150   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:50.993174   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:04:51.056974   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:51.124585   65170 default_sa.go:45] found service account: "default"
	I0318 22:04:51.124612   65170 default_sa.go:55] duration metric: took 206.163161ms for default service account to be created ...
	I0318 22:04:51.124624   65170 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:51.347373   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.347408   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347419   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347426   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.347433   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.347440   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.347452   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.347458   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.347478   65170 retry.go:31] will retry after 201.830143ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.556559   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.556594   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556605   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556621   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.556630   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.556638   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.556648   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.556663   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.556681   65170 retry.go:31] will retry after 312.139871ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.878515   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.878546   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878554   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878562   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.878568   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.878573   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.878579   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.878582   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.878596   65170 retry.go:31] will retry after 379.864885ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.364944   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.364971   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364979   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364987   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.364995   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.365002   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.365011   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:52.365018   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.365039   65170 retry.go:31] will retry after 598.040475ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.752856   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921596456s)
	I0318 22:04:52.752915   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.752928   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753278   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753303   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.753314   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.753323   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753565   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753580   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.781081   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.781102   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.781396   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.781417   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.973228   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.973256   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Running
	I0318 22:04:52.973262   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Running
	I0318 22:04:52.973269   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.973275   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.973282   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.973289   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Running
	I0318 22:04:52.973295   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.973304   65170 system_pods.go:126] duration metric: took 1.848673952s to wait for k8s-apps to be running ...
	I0318 22:04:52.973310   65170 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:52.973361   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:53.343164   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.286142485s)
	I0318 22:04:53.343193   65170 system_svc.go:56] duration metric: took 369.874916ms WaitForService to wait for kubelet
	I0318 22:04:53.343215   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343229   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343216   65170 kubeadm.go:576] duration metric: took 2.89507195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:53.343238   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:53.343265   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.478242665s)
	I0318 22:04:53.343301   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343311   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343510   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.343555   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.343564   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.343572   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343580   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345065   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345078   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345082   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345065   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345094   65170 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-660775"
	I0318 22:04:53.345094   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345117   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345127   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.345136   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345401   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345400   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345419   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.347668   65170 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0318 22:04:53.348839   65170 addons.go:505] duration metric: took 2.900603006s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0318 22:04:53.363245   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:53.363274   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:53.363307   65170 node_conditions.go:105] duration metric: took 20.053581ms to run NodePressure ...
	I0318 22:04:53.363325   65170 start.go:240] waiting for startup goroutines ...
	I0318 22:04:53.363339   65170 start.go:245] waiting for cluster config update ...
	I0318 22:04:53.363353   65170 start.go:254] writing updated cluster config ...
	I0318 22:04:53.363674   65170 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:53.429018   65170 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:04:53.430584   65170 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-660775" cluster and "default" namespace by default
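	The start above finishes cleanly, so a quick sanity check of that cluster is possible from the host. A minimal sketch, assuming kubectl is on PATH and using the context name minikube reports above:

	  # Point kubectl at the cluster the log says was just configured.
	  kubectl config use-context default-k8s-diff-port-660775
	  kubectl get nodes -o wide
	  # The kube-system pods enumerated above (coredns, etcd, kube-apiserver, ...) should show Running.
	  kubectl -n kube-system get pods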
	I0318 22:04:59.424318   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:59.425052   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:59.425084   65622 kubeadm.go:309] 
	I0318 22:04:59.425146   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:04:59.425207   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:04:59.425223   65622 kubeadm.go:309] 
	I0318 22:04:59.425262   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:04:59.425298   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:04:59.425454   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:04:59.425481   65622 kubeadm.go:309] 
	I0318 22:04:59.425647   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:04:59.425704   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:04:59.425752   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:04:59.425762   65622 kubeadm.go:309] 
	I0318 22:04:59.425917   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:04:59.426033   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:04:59.426045   65622 kubeadm.go:309] 
	I0318 22:04:59.426212   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:04:59.426346   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:04:59.426454   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:04:59.426547   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:04:59.426558   65622 kubeadm.go:309] 
	I0318 22:04:59.427148   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:59.427271   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:04:59.427372   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 22:04:59.427528   65622 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
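	Before the retry below, the kubeadm output above already names the checks worth running on the node. A minimal sketch of those same commands, executed inside the VM (for example via 'minikube ssh -p <profile>', where <profile> is a placeholder for the profile under test):

	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet | tail -n 100
	  # List any control-plane containers CRI-O managed to start, as the message suggests:
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause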
	
	I0318 22:04:59.427572   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:05:00.055064   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:05:00.070514   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:05:00.083916   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:05:00.083938   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:05:00.083984   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:05:00.095316   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:05:00.095362   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:05:00.106457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:05:00.117255   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:05:00.117309   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:05:00.128432   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.138314   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:05:00.138371   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.148443   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:05:00.158539   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:05:00.158585   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
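	The four grep/rm pairs above implement the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A rough sketch of that pattern as a loop (an illustration of the behaviour logged here, not minikube's actual code):

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done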
	I0318 22:05:00.169165   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:05:00.245400   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:05:00.245473   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:05:00.417644   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:05:00.417785   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:05:00.417883   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:05:00.634147   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:05:00.635738   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:05:00.635843   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:05:00.635930   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:05:00.636028   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:05:00.636089   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:05:00.636314   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:05:00.636537   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:05:00.636954   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:05:00.637502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:05:00.637924   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:05:00.638340   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:05:00.638425   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:05:00.638514   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:05:00.913839   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:05:00.990231   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:05:01.230957   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:05:01.548589   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:05:01.567890   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:05:01.569831   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:05:01.569913   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:05:01.734815   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:05:01.736685   65622 out.go:204]   - Booting up control plane ...
	I0318 22:05:01.736810   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:05:01.749926   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:05:01.751335   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:05:01.753793   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:05:01.754600   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:05:41.756944   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:05:41.757321   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:41.757565   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:46.758228   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:46.758483   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:56.759061   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:56.759280   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:16.760134   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:16.760369   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761317   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:56.761611   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761630   65622 kubeadm.go:309] 
	I0318 22:06:56.761682   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:06:56.761725   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:06:56.761732   65622 kubeadm.go:309] 
	I0318 22:06:56.761782   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:06:56.761829   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:06:56.761971   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:06:56.761988   65622 kubeadm.go:309] 
	I0318 22:06:56.762111   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:06:56.762159   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:06:56.762207   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:06:56.762221   65622 kubeadm.go:309] 
	I0318 22:06:56.762382   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:06:56.762502   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:06:56.762512   65622 kubeadm.go:309] 
	I0318 22:06:56.762630   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:06:56.762758   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:06:56.762856   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:06:56.762985   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:06:56.763011   65622 kubeadm.go:309] 
	I0318 22:06:56.763456   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:06:56.763590   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:06:56.763681   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 22:06:56.763764   65622 kubeadm.go:393] duration metric: took 7m58.719030677s to StartCluster
	I0318 22:06:56.763817   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:06:56.763885   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:06:56.813440   65622 cri.go:89] found id: ""
	I0318 22:06:56.813469   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.813480   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:06:56.813487   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:06:56.813553   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:06:56.852826   65622 cri.go:89] found id: ""
	I0318 22:06:56.852854   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.852865   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:06:56.852872   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:06:56.852949   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:06:56.894024   65622 cri.go:89] found id: ""
	I0318 22:06:56.894049   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.894057   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:06:56.894062   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:06:56.894123   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:06:56.932924   65622 cri.go:89] found id: ""
	I0318 22:06:56.932955   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.932967   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:06:56.932975   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:06:56.933033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:06:56.973307   65622 cri.go:89] found id: ""
	I0318 22:06:56.973336   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.973344   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:06:56.973350   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:06:56.973405   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:06:57.009107   65622 cri.go:89] found id: ""
	I0318 22:06:57.009134   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.009142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:06:57.009151   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:06:57.009213   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:06:57.046883   65622 cri.go:89] found id: ""
	I0318 22:06:57.046912   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.046922   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:06:57.046930   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:06:57.046991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:06:57.087670   65622 cri.go:89] found id: ""
	I0318 22:06:57.087698   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.087709   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:06:57.087722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:06:57.087736   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:06:57.143284   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:06:57.143320   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:06:57.159775   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:06:57.159803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:06:57.248520   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:06:57.248548   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:06:57.248563   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:06:57.368197   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:06:57.368230   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 22:06:57.413080   65622 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 22:06:57.413134   65622 out.go:239] * 
	W0318 22:06:57.413205   65622 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.413237   65622 out.go:239] * 
	W0318 22:06:57.414373   65622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 22:06:57.417746   65622 out.go:177] 
	W0318 22:06:57.418940   65622 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.419004   65622 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 22:06:57.419028   65622 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 22:06:57.420531   65622 out.go:177] 
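	The suggestion above points at the kubelet cgroup driver. A hedged example of retrying the start with that flag (the profile name is a placeholder, and the driver/runtime values are assumptions consistent with this KVM/cri-o job rather than values taken from the log):

	  minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	    --extra-config=kubelet.cgroup-driver=systemd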
	
	
	==> CRI-O <==
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.704542581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799985704476914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff056608-b345-4ae8-ab95-c5c0f6b0fcf5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.705615979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54ee97d6-748a-4f3b-a299-c80b20ad0b51 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.705671499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54ee97d6-748a-4f3b-a299-c80b20ad0b51 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.705861507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799207958028017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314b6b17b16f2cdb77890e538b13ff84b0215fd67ca536c79a850c3cd6e34fed,PodSandboxId:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710799186843369189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{io.kubernetes.container.hash: 9397d2db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540,PodSandboxId:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710799183779707432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,},Annotations:map[string]string{io.kubernetes.container.hash: fe1be4f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5,PodSandboxId:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710799176840495340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a3
97-cdf1a578c5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 31514310,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710799176823731338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d4
51,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5,PodSandboxId:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710799171568912986,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4,PodSandboxId:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710799171538705869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bc826559,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84,PodSandboxId:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710799171453125900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce,PodSandboxId:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710799171348526233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1f2e3c34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54ee97d6-748a-4f3b-a299-c80b20ad0b51 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.753395961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0bd189c-6bde-44c6-82f2-29df44ca7b3f name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.753510686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0bd189c-6bde-44c6-82f2-29df44ca7b3f name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.754500926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0d15670-ae98-40cd-8193-d6e39238876b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.754959778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799985754937057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0d15670-ae98-40cd-8193-d6e39238876b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.755594342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8f127c8-5de1-4d3e-9c33-a9d47eb68e7c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.755700482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8f127c8-5de1-4d3e-9c33-a9d47eb68e7c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.756033526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799207958028017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314b6b17b16f2cdb77890e538b13ff84b0215fd67ca536c79a850c3cd6e34fed,PodSandboxId:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710799186843369189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{io.kubernetes.container.hash: 9397d2db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540,PodSandboxId:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710799183779707432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,},Annotations:map[string]string{io.kubernetes.container.hash: fe1be4f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5,PodSandboxId:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710799176840495340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a3
97-cdf1a578c5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 31514310,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710799176823731338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d4
51,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5,PodSandboxId:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710799171568912986,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4,PodSandboxId:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710799171538705869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bc826559,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84,PodSandboxId:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710799171453125900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce,PodSandboxId:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710799171348526233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1f2e3c34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8f127c8-5de1-4d3e-9c33-a9d47eb68e7c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.798362228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ab0fc0e-3a3c-45a8-b88a-a6c6c96b76fc name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.798444689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ab0fc0e-3a3c-45a8-b88a-a6c6c96b76fc name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.799606869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cbfe774-a4e6-4606-9509-8f4d7a71a6fc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.799931701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799985799913835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cbfe774-a4e6-4606-9509-8f4d7a71a6fc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.800778992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0493075e-15d9-446b-95be-a6d67eb0b004 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.800832883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0493075e-15d9-446b-95be-a6d67eb0b004 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.801418045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799207958028017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314b6b17b16f2cdb77890e538b13ff84b0215fd67ca536c79a850c3cd6e34fed,PodSandboxId:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710799186843369189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{io.kubernetes.container.hash: 9397d2db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540,PodSandboxId:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710799183779707432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,},Annotations:map[string]string{io.kubernetes.container.hash: fe1be4f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5,PodSandboxId:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710799176840495340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a3
97-cdf1a578c5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 31514310,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710799176823731338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d4
51,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5,PodSandboxId:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710799171568912986,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4,PodSandboxId:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710799171538705869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bc826559,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84,PodSandboxId:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710799171453125900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce,PodSandboxId:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710799171348526233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1f2e3c34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0493075e-15d9-446b-95be-a6d67eb0b004 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.847871519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6880a56-cfd2-4700-8da7-9a8720ac1f57 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.847941255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6880a56-cfd2-4700-8da7-9a8720ac1f57 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.849804425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed083499-b688-4ba0-a025-f724e151327e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.850743841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710799985850716168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed083499-b688-4ba0-a025-f724e151327e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.851804275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1111dd4-3dbd-428d-a130-e0a6a1760f61 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.851876695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1111dd4-3dbd-428d-a130-e0a6a1760f61 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:05 no-preload-963041 crio[697]: time="2024-03-18 22:13:05.852184560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799207958028017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314b6b17b16f2cdb77890e538b13ff84b0215fd67ca536c79a850c3cd6e34fed,PodSandboxId:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710799186843369189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{io.kubernetes.container.hash: 9397d2db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540,PodSandboxId:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710799183779707432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,},Annotations:map[string]string{io.kubernetes.container.hash: fe1be4f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5,PodSandboxId:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710799176840495340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a3
97-cdf1a578c5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 31514310,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710799176823731338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d4
51,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5,PodSandboxId:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710799171568912986,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4,PodSandboxId:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710799171538705869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bc826559,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84,PodSandboxId:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710799171453125900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce,PodSandboxId:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710799171348526233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1f2e3c34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1111dd4-3dbd-428d-a130-e0a6a1760f61 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9559a9b3fa160       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   956268acc0e56       storage-provisioner
	314b6b17b16f2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   af474d8de5559       busybox
	95d95025af787       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   345b2c562f629       coredns-76f75df574-6mtzp
	757a8fc5ae06d       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   127a6274170c0       kube-proxy-kkrzx
	761bc0d14f31e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   956268acc0e56       storage-provisioner
	4896452ff8ddb       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   cc3be690b0316       kube-scheduler-no-preload-963041
	d27b0e98d5f67       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   c3268ad6d80ba       etcd-no-preload-963041
	6b309d737fd2f       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   33d230c469ced       kube-controller-manager-no-preload-963041
	d723ad24bd61e       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   59ae21676f540       kube-apiserver-no-preload-963041
	
	
	==> coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33148 - 53974 "HINFO IN 4416264748007856954.6098944003411770047. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014649442s
	
	
	==> describe nodes <==
	Name:               no-preload-963041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-963041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=no-preload-963041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T21_50_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:50:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-963041
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 22:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 22:10:18 +0000   Mon, 18 Mar 2024 21:50:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 22:10:18 +0000   Mon, 18 Mar 2024 21:50:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 22:10:18 +0000   Mon, 18 Mar 2024 21:50:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 22:10:18 +0000   Mon, 18 Mar 2024 21:59:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.84
	  Hostname:    no-preload-963041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b568cf6d942140899f719c07fa284928
	  System UUID:                b568cf6d-9421-4089-9f71-9c07fa284928
	  Boot ID:                    2c801869-f97e-42b2-8386-4a51a6feb5cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-76f75df574-6mtzp                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-963041                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-963041             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-963041    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-kkrzx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-963041             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-rdthh              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-963041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-963041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-963041 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-963041 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-963041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-963041 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-963041 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-963041 event: Registered Node no-preload-963041 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-963041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-963041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-963041 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-963041 event: Registered Node no-preload-963041 in Controller
	
	
	==> dmesg <==
	[Mar18 21:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052884] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044087] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.919726] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Mar18 21:59] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.715162] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.117415] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.063638] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068187] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.210520] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.143900] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.351645] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[ +17.979238] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.060092] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.362133] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +5.656136] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.916325] systemd-fstab-generator[1932]: Ignoring "noauto" option for root device
	[  +1.768167] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.171941] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] <==
	{"level":"warn","ts":"2024-03-18T21:59:36.663813Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T21:59:35.908468Z","time spent":"755.337236ms","remote":"127.0.0.1:54552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-03-18T21:59:37.040142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.172189ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:volume-scheduler\" ","response":"range_response_count:1 size:715"}
	{"level":"info","ts":"2024-03-18T21:59:37.040311Z","caller":"traceutil/trace.go:171","msg":"trace[1931595085] range","detail":"{range_begin:/registry/clusterrolebindings/system:volume-scheduler; range_end:; response_count:1; response_revision:538; }","duration":"280.362273ms","start":"2024-03-18T21:59:36.759935Z","end":"2024-03-18T21:59:37.040297Z","steps":["trace[1931595085] 'range keys from in-memory index tree'  (duration: 280.009366ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T21:59:37.961878Z","caller":"traceutil/trace.go:171","msg":"trace[104491370] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"533.336166ms","start":"2024-03-18T21:59:37.428526Z","end":"2024-03-18T21:59:37.961863Z","steps":["trace[104491370] 'process raft request'  (duration: 519.317667ms)","trace[104491370] 'compare'  (duration: 13.922042ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T21:59:37.962068Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T21:59:37.428509Z","time spent":"533.45922ms","remote":"127.0.0.1:54594","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.72.84\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.72.84\" value_size:66 lease:7690615803299887647 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.84\" > >"}
	{"level":"info","ts":"2024-03-18T21:59:38.032491Z","caller":"traceutil/trace.go:171","msg":"trace[1954620342] linearizableReadLoop","detail":"{readStateIndex:570; appliedIndex:568; }","duration":"376.518475ms","start":"2024-03-18T21:59:37.65595Z","end":"2024-03-18T21:59:38.032468Z","steps":["trace[1954620342] 'read index received'  (duration: 292.529813ms)","trace[1954620342] 'applied index is now lower than readState.Index'  (duration: 83.98787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T21:59:38.032511Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T21:59:37.519865Z","time spent":"512.64214ms","remote":"127.0.0.1:54636","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-03-18T21:59:38.032825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.8914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41054"}
	{"level":"info","ts":"2024-03-18T21:59:38.032859Z","caller":"traceutil/trace.go:171","msg":"trace[1455292273] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:539; }","duration":"376.933973ms","start":"2024-03-18T21:59:37.655913Z","end":"2024-03-18T21:59:38.032847Z","steps":["trace[1455292273] 'agreement among raft nodes before linearized reading'  (duration: 376.695368ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:59:38.032881Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T21:59:37.655896Z","time spent":"376.980085ms","remote":"127.0.0.1:54768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":8,"response size":41078,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-03-18T21:59:38.034023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.945841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-963041\" ","response":"range_response_count:1 size:4604"}
	{"level":"info","ts":"2024-03-18T21:59:38.034088Z","caller":"traceutil/trace.go:171","msg":"trace[1350820805] range","detail":"{range_begin:/registry/minions/no-preload-963041; range_end:; response_count:1; response_revision:539; }","duration":"126.006905ms","start":"2024-03-18T21:59:37.908065Z","end":"2024-03-18T21:59:38.034072Z","steps":["trace[1350820805] 'agreement among raft nodes before linearized reading'  (duration: 125.916989ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:59:38.034345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.935211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-03-18T21:59:38.034397Z","caller":"traceutil/trace.go:171","msg":"trace[1815315574] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:539; }","duration":"187.987435ms","start":"2024-03-18T21:59:37.8464Z","end":"2024-03-18T21:59:38.034388Z","steps":["trace[1815315574] 'agreement among raft nodes before linearized reading'  (duration: 186.388303ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T21:59:38.21866Z","caller":"traceutil/trace.go:171","msg":"trace[964714991] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"182.039862ms","start":"2024-03-18T21:59:38.036603Z","end":"2024-03-18T21:59:38.218643Z","steps":["trace[964714991] 'process raft request'  (duration: 126.682506ms)","trace[964714991] 'compare'  (duration: 54.862272ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T21:59:38.218779Z","caller":"traceutil/trace.go:171","msg":"trace[418374600] linearizableReadLoop","detail":"{readStateIndex:571; appliedIndex:570; }","duration":"179.394561ms","start":"2024-03-18T21:59:38.039376Z","end":"2024-03-18T21:59:38.218771Z","steps":["trace[418374600] 'read index received'  (duration: 123.917935ms)","trace[418374600] 'applied index is now lower than readState.Index'  (duration: 55.475162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T21:59:38.21888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.505213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:480"}
	{"level":"info","ts":"2024-03-18T21:59:38.21953Z","caller":"traceutil/trace.go:171","msg":"trace[829889812] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:540; }","duration":"180.148682ms","start":"2024-03-18T21:59:38.039357Z","end":"2024-03-18T21:59:38.219505Z","steps":["trace[829889812] 'agreement among raft nodes before linearized reading'  (duration: 179.436188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:59:38.219837Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.356808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-03-18T21:59:38.219956Z","caller":"traceutil/trace.go:171","msg":"trace[325180291] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:540; }","duration":"180.472128ms","start":"2024-03-18T21:59:38.039464Z","end":"2024-03-18T21:59:38.219937Z","steps":["trace[325180291] 'agreement among raft nodes before linearized reading'  (duration: 180.294046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:59:38.220202Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.444375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4604"}
	{"level":"info","ts":"2024-03-18T21:59:38.220375Z","caller":"traceutil/trace.go:171","msg":"trace[1760800734] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:540; }","duration":"171.618463ms","start":"2024-03-18T21:59:38.048749Z","end":"2024-03-18T21:59:38.220368Z","steps":["trace[1760800734] 'agreement among raft nodes before linearized reading'  (duration: 171.42094ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:09:33.146526Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":881}
	{"level":"info","ts":"2024-03-18T22:09:33.160914Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":881,"took":"13.250651ms","hash":2364969846}
	{"level":"info","ts":"2024-03-18T22:09:33.161016Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2364969846,"revision":881,"compact-revision":-1}
	
	
	==> kernel <==
	 22:13:06 up 14 min,  0 users,  load average: 0.45, 0.31, 0.20
	Linux no-preload-963041 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] <==
	I0318 22:07:35.813798       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:09:34.818067       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:09:34.818165       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0318 22:09:35.818677       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:09:35.818746       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:09:35.818756       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:09:35.818885       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:09:35.818977       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:09:35.820297       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:10:35.819649       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:10:35.819877       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:10:35.819904       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:10:35.821170       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:10:35.821370       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:10:35.821418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:12:35.820729       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:12:35.820824       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:12:35.820834       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:12:35.821833       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:12:35.821918       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:12:35.821929       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] <==
	I0318 22:07:19.372494       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:07:48.905565       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:07:49.381956       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:08:18.910813       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:08:19.390199       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:08:48.916417       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:08:49.398167       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:09:18.923401       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:09:19.408897       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:09:48.929585       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:09:49.417974       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:10:18.937146       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:10:19.430018       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:10:48.942383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:10:49.438086       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 22:10:49.703912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="293.468µs"
	I0318 22:11:01.701101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="129.113µs"
	E0318 22:11:18.947976       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:11:19.447127       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:11:48.954107       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:11:49.456444       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:12:18.959662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:12:19.464923       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:12:48.965899       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:12:49.473339       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] <==
	I0318 21:59:37.901165       1 server_others.go:72] "Using iptables proxy"
	I0318 21:59:38.042943       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.84"]
	I0318 21:59:38.088295       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 21:59:38.088390       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:59:38.088417       1 server_others.go:168] "Using iptables Proxier"
	I0318 21:59:38.092102       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:59:38.092382       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 21:59:38.092429       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:59:38.093422       1 config.go:188] "Starting service config controller"
	I0318 21:59:38.093487       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:59:38.093521       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:59:38.093538       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:59:38.093986       1 config.go:315] "Starting node config controller"
	I0318 21:59:38.094984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 21:59:38.194111       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 21:59:38.194168       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:59:38.195597       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] <==
	I0318 21:59:32.769772       1 serving.go:380] Generated self-signed cert in-memory
	W0318 21:59:34.704722       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 21:59:34.704844       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 21:59:34.705019       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 21:59:34.705179       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 21:59:34.825829       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0318 21:59:34.825884       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:59:34.832922       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 21:59:34.833121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 21:59:34.833138       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 21:59:34.833159       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 21:59:34.934139       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 22:10:38 no-preload-963041 kubelet[1323]: E0318 22:10:38.702901    1323 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 18 22:10:38 no-preload-963041 kubelet[1323]: E0318 22:10:38.703462    1323 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 18 22:10:38 no-preload-963041 kubelet[1323]: E0318 22:10:38.703742    1323 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2ftr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rdthh_kube-system(50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 18 22:10:38 no-preload-963041 kubelet[1323]: E0318 22:10:38.704121    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:10:49 no-preload-963041 kubelet[1323]: E0318 22:10:49.685789    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:11:01 no-preload-963041 kubelet[1323]: E0318 22:11:01.685351    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:11:14 no-preload-963041 kubelet[1323]: E0318 22:11:14.687975    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:11:27 no-preload-963041 kubelet[1323]: E0318 22:11:27.685488    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:11:30 no-preload-963041 kubelet[1323]: E0318 22:11:30.714198    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:11:30 no-preload-963041 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:11:30 no-preload-963041 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:11:30 no-preload-963041 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:11:30 no-preload-963041 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:11:39 no-preload-963041 kubelet[1323]: E0318 22:11:39.685282    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:11:51 no-preload-963041 kubelet[1323]: E0318 22:11:51.684574    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:12:02 no-preload-963041 kubelet[1323]: E0318 22:12:02.685770    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:12:16 no-preload-963041 kubelet[1323]: E0318 22:12:16.685952    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:12:30 no-preload-963041 kubelet[1323]: E0318 22:12:30.715943    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:12:30 no-preload-963041 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:12:30 no-preload-963041 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:12:30 no-preload-963041 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:12:30 no-preload-963041 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:12:31 no-preload-963041 kubelet[1323]: E0318 22:12:31.685724    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:12:44 no-preload-963041 kubelet[1323]: E0318 22:12:44.687720    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:12:58 no-preload-963041 kubelet[1323]: E0318 22:12:58.687582    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	
	
	==> storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] <==
	I0318 21:59:37.331523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0318 22:00:07.336457       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] <==
	I0318 22:00:08.076679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 22:00:08.084989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 22:00:08.085290       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 22:00:25.492806       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 22:00:25.492954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-963041_0a21f7a5-74e1-4a26-bb19-9b7a82763866!
	I0318 22:00:25.493686       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"505589fa-92f7-4d66-9fcc-93d0329ea57e", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-963041_0a21f7a5-74e1-4a26-bb19-9b7a82763866 became leader
	I0318 22:00:25.593905       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-963041_0a21f7a5-74e1-4a26-bb19-9b7a82763866!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963041 -n no-preload-963041
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-963041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-rdthh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-963041 describe pod metrics-server-57f55c9bc5-rdthh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-963041 describe pod metrics-server-57f55c9bc5-rdthh: exit status 1 (66.208164ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rdthh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-963041 describe pod metrics-server-57f55c9bc5-rdthh: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.22s)
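Note on the recurring metrics-server ImagePullBackOff in the kubelet log above: the Audit table further down records that the addon was enabled on no-preload-963041 with --registries=MetricsServer=fake.domain, and the kubelet errors show that host never resolves ("dial tcp: lookup fake.domain: no such host"), so the pod can never reach Running. A minimal sketch of reproducing the same state against this profile (the enable flags are copied from the Audit table; the k8s-app=metrics-server label selector is an assumption about the addon's pod labels) would be:

    out/minikube-linux-amd64 -p no-preload-963041 addons enable metrics-server \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context no-preload-963041 -n kube-system get pods -l k8s-app=metrics-server

The second command should then show the metrics-server pod stuck in ErrImagePull/ImagePullBackOff, matching the non-running pod listed by helpers_test.go:272.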

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0318 22:05:14.158190   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 22:05:23.236983   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 22:05:30.985611   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 22:05:38.567560   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 22:06:51.607645   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 22:13:54.090046372 +0000 UTC m=+6290.991834811
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
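For context, the wait performed by start_stop_delete_test.go:274 polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace for up to 9 minutes. A hand-run approximation of that check (this is not the Go helper itself, just an equivalent kubectl invocation) would be:

    kubectl --context default-k8s-diff-port-660775 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

Here that wait times out because no matching pod ever appears, which is consistent with the "addons enable dashboard -p default-k8s-diff-port-660775" entry in the Audit table below having no recorded End Time.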
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-660775 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-660775 logs -n 25: (2.136607248s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-389288 sudo cat                              | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo find                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo crio                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-389288                                       | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-369155 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | disable-driver-mounts-369155                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:50 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-660775  | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC |                     |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-141758            | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:54:36.607114   65699 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:54:36.607254   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607266   65699 out.go:304] Setting ErrFile to fd 2...
	I0318 21:54:36.607272   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607706   65699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:54:36.608596   65699 out.go:298] Setting JSON to false
	I0318 21:54:36.609468   65699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5821,"bootTime":1710793056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:54:36.609529   65699 start.go:139] virtualization: kvm guest
	I0318 21:54:36.611401   65699 out.go:177] * [no-preload-963041] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:54:36.612703   65699 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:54:36.612704   65699 notify.go:220] Checking for updates...
	I0318 21:54:36.613976   65699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:54:36.615157   65699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:54:36.616283   65699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:54:36.617431   65699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:54:36.618615   65699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:54:36.620094   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:54:36.620490   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.620537   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.634914   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0318 21:54:36.635251   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.635706   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.635728   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.636019   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.636173   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.636411   65699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:54:36.636719   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.636756   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.650608   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0318 21:54:36.650946   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.651358   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.651383   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.651694   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.651832   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.682407   65699 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:54:36.683826   65699 start.go:297] selected driver: kvm2
	I0318 21:54:36.683837   65699 start.go:901] validating driver "kvm2" against &{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpir
ation:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.683941   65699 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:54:36.684624   65699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.684696   65699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:54:36.699415   65699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:54:36.699766   65699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:54:36.699827   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:54:36.699840   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:54:36.699883   65699 start.go:340] cluster config:
	{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.699984   65699 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.701584   65699 out.go:177] * Starting "no-preload-963041" primary control-plane node in "no-preload-963041" cluster
	I0318 21:54:36.702792   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:54:36.702911   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:54:36.703027   65699 cache.go:107] acquiring lock: {Name:mk20bcc8d34b80cc44c1e33bc5e0ec5cd82ba46e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703044   65699 cache.go:107] acquiring lock: {Name:mk299438a86024ea6c96280d8bbe30c1283fa996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703087   65699 cache.go:107] acquiring lock: {Name:mkf5facbc69c16807f75e75a80a4afa3f97a0ecc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703124   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0318 21:54:36.703127   65699 start.go:360] acquireMachinesLock for no-preload-963041: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:54:36.703141   65699 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 102.209µs
	I0318 21:54:36.703156   65699 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0318 21:54:36.703104   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 21:54:36.703174   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 21:54:36.703172   65699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 156.262µs
	I0318 21:54:36.703190   65699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703043   65699 cache.go:107] acquiring lock: {Name:mk4c82b4e60b551671fa99921294b8e1f551d382 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703189   65699 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 104.037µs
	I0318 21:54:36.703209   65699 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 21:54:36.703137   65699 cache.go:107] acquiring lock: {Name:mk847ac7ddb8863389782289e61001579ff6ec5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703204   65699 cache.go:107] acquiring lock: {Name:mk1bf8cc3e30a7cf88f25697f1021501ea6ee4ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703243   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 21:54:36.703254   65699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 163.57µs
	I0318 21:54:36.703233   65699 cache.go:107] acquiring lock: {Name:mkf9c9b33c4d1ca54e3364ad39dcd3b10bc50534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703265   65699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703224   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 21:54:36.703282   65699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 247.672µs
	I0318 21:54:36.703293   65699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 21:54:36.703293   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 21:54:36.703293   65699 cache.go:107] acquiring lock: {Name:mkd0bd00e6f69df37097a8ce792bcc8844efbc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703315   65699 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 156.33µs
	I0318 21:54:36.703329   65699 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 21:54:36.703363   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 21:54:36.703385   65699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 207.404µs
	I0318 21:54:36.703400   65699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703411   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 21:54:36.703419   65699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 164.5µs
	I0318 21:54:36.703435   65699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703447   65699 cache.go:87] Successfully saved all images to host disk.
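For context, the cache lines above follow a simple pattern: take a per-image lock, check whether the cached tarball already exists on disk, and record how long the check took. A minimal Go sketch of that check, standard library only (the path mapping and function names here are illustrative assumptions, not minikube's actual cache code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"time"
	)

	// cachePathFor maps an image ref such as "registry.k8s.io/etcd:3.5.10-0" to a
	// tarball path under cacheDir, roughly mirroring the paths in the log above
	// (the real layout is an assumption here).
	func cachePathFor(cacheDir, image string) string {
		return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	}

	// ensureCached reports whether the image tarball already exists and how long
	// the existence check took, like the "took ...µs" lines above.
	func ensureCached(cacheDir, image string) (bool, time.Duration) {
		start := time.Now()
		_, err := os.Stat(cachePathFor(cacheDir, image))
		return err == nil, time.Since(start)
	}

	func main() {
		ok, took := ensureCached("/tmp/minikube-cache", "registry.k8s.io/etcd:3.5.10-0")
		fmt.Printf("cached=%v took=%s\n", ok, took)
	}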
	I0318 21:54:40.421098   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:43.493261   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:49.573105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:52.645158   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:58.725124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:01.797077   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:07.877116   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:10.949096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:17.029117   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:20.101131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:26.181141   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:29.253113   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:35.333097   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:38.405132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:44.485208   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:47.557123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:53.637185   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:56.709102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:02.789134   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:05.861146   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:11.941102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:15.013092   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:21.093132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:24.165129   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:30.245127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:33.317151   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:39.397126   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:42.469163   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:48.549145   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:51.621085   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:57.701118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:00.773108   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:06.853105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:09.925096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:16.005131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:19.077111   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:25.157130   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:28.229107   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:34.309152   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:37.381127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:43.461123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:46.533127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:52.613124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:55.685135   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:01.765118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:04.837197   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
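The repeated "no route to host" lines are the provisioner dialing the guest's SSH port until the VM becomes reachable. A minimal Go sketch of that dial-and-retry loop, standard library only (the address, interval, and deadline are illustrative assumptions, not minikube's exact values):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH dials host:22 until the connection succeeds or the deadline
	// passes, logging each failure much like the libmachine lines above.
	func waitForSSH(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("Error dialing TCP: %v\n", err)
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		if err := waitForSSH("192.168.50.150:22", time.Minute); err != nil {
			fmt.Println(err)
		}
	}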
	I0318 21:58:07.840986   65211 start.go:364] duration metric: took 4m36.169318619s to acquireMachinesLock for "embed-certs-141758"
	I0318 21:58:07.841046   65211 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:07.841054   65211 fix.go:54] fixHost starting: 
	I0318 21:58:07.841507   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:07.841544   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:07.856544   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0318 21:58:07.856976   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:07.857424   65211 main.go:141] libmachine: Using API Version  1
	I0318 21:58:07.857452   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:07.857783   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:07.857971   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:07.858126   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 21:58:07.859909   65211 fix.go:112] recreateIfNeeded on embed-certs-141758: state=Stopped err=<nil>
	I0318 21:58:07.859947   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	W0318 21:58:07.860120   65211 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:07.862134   65211 out.go:177] * Restarting existing kvm2 VM for "embed-certs-141758" ...
	I0318 21:58:07.838706   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:07.838746   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839036   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:58:07.839060   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839263   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:58:07.840867   65170 machine.go:97] duration metric: took 4m37.426711052s to provisionDockerMachine
	I0318 21:58:07.840915   65170 fix.go:56] duration metric: took 4m37.446713188s for fixHost
	I0318 21:58:07.840923   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 4m37.446748943s
	W0318 21:58:07.840945   65170 start.go:713] error starting host: provision: host is not running
	W0318 21:58:07.841017   65170 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 21:58:07.841026   65170 start.go:728] Will try again in 5 seconds ...
	I0318 21:58:07.863352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Start
	I0318 21:58:07.863483   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring networks are active...
	I0318 21:58:07.864202   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network default is active
	I0318 21:58:07.864652   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network mk-embed-certs-141758 is active
	I0318 21:58:07.865077   65211 main.go:141] libmachine: (embed-certs-141758) Getting domain xml...
	I0318 21:58:07.865858   65211 main.go:141] libmachine: (embed-certs-141758) Creating domain...
	I0318 21:58:09.026367   65211 main.go:141] libmachine: (embed-certs-141758) Waiting to get IP...
	I0318 21:58:09.027144   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.027524   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.027580   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.027503   66223 retry.go:31] will retry after 260.499882ms: waiting for machine to come up
	I0318 21:58:09.289935   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.290490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.290522   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.290450   66223 retry.go:31] will retry after 328.000758ms: waiting for machine to come up
	I0318 21:58:09.619947   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.620337   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.620384   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.620305   66223 retry.go:31] will retry after 419.640035ms: waiting for machine to come up
	I0318 21:58:10.041775   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.042186   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.042213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.042134   66223 retry.go:31] will retry after 482.732439ms: waiting for machine to come up
	I0318 21:58:10.526892   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.527282   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.527307   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.527253   66223 retry.go:31] will retry after 718.696645ms: waiting for machine to come up
	I0318 21:58:11.247165   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.247545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.247571   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.247501   66223 retry.go:31] will retry after 603.951593ms: waiting for machine to come up
	I0318 21:58:12.842928   65170 start.go:360] acquireMachinesLock for default-k8s-diff-port-660775: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:58:11.853119   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.853408   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.853438   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.853362   66223 retry.go:31] will retry after 1.191963995s: waiting for machine to come up
	I0318 21:58:13.046915   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:13.047289   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:13.047319   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:13.047237   66223 retry.go:31] will retry after 1.314666633s: waiting for machine to come up
	I0318 21:58:14.363693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:14.364109   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:14.364135   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:14.364064   66223 retry.go:31] will retry after 1.341191632s: waiting for machine to come up
	I0318 21:58:15.707425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:15.707921   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:15.707951   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:15.707862   66223 retry.go:31] will retry after 1.887572842s: waiting for machine to come up
	I0318 21:58:17.596545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:17.596970   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:17.597002   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:17.596899   66223 retry.go:31] will retry after 2.820006704s: waiting for machine to come up
	I0318 21:58:20.420327   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:20.420693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:20.420714   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:20.420659   66223 retry.go:31] will retry after 3.099836206s: waiting for machine to come up
	I0318 21:58:23.522155   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:23.522490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:23.522517   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:23.522450   66223 retry.go:31] will retry after 4.512794132s: waiting for machine to come up
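The "will retry after ..." lines show retry.go polling for the machine's IP with a growing delay between attempts. A minimal, self-contained Go sketch of that backoff pattern (the growth factor and attempt count are assumptions for illustration, not minikube's exact policy):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds, sleeping a growing interval
	// between attempts, similar to the "will retry after ..." lines above.
	func retryWithBackoff(fn func() error, attempts int, initial time.Duration) error {
		delay := initial
		for i := 0; i < attempts; i++ {
			err := fn()
			if err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait; the exact policy is an assumption
		}
		return errors.New("machine did not come up in time")
	}

	func main() {
		calls := 0
		_ = retryWithBackoff(func() error {
			calls++
			if calls < 4 {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 10, 250*time.Millisecond)
	}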
	I0318 21:58:29.414007   65622 start.go:364] duration metric: took 3m59.339882587s to acquireMachinesLock for "old-k8s-version-648232"
	I0318 21:58:29.414072   65622 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:29.414080   65622 fix.go:54] fixHost starting: 
	I0318 21:58:29.414429   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:29.414462   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:29.431057   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0318 21:58:29.431482   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:29.432042   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:58:29.432067   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:29.432376   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:29.432568   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:29.432725   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetState
	I0318 21:58:29.433956   65622 fix.go:112] recreateIfNeeded on old-k8s-version-648232: state=Stopped err=<nil>
	I0318 21:58:29.433996   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	W0318 21:58:29.434155   65622 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:29.436328   65622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-648232" ...
	I0318 21:58:29.437884   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .Start
	I0318 21:58:29.438022   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring networks are active...
	I0318 21:58:29.438616   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network default is active
	I0318 21:58:29.438967   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network mk-old-k8s-version-648232 is active
	I0318 21:58:29.439362   65622 main.go:141] libmachine: (old-k8s-version-648232) Getting domain xml...
	I0318 21:58:29.440065   65622 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
	I0318 21:58:28.036425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.036898   65211 main.go:141] libmachine: (embed-certs-141758) Found IP for machine: 192.168.39.243
	I0318 21:58:28.036949   65211 main.go:141] libmachine: (embed-certs-141758) Reserving static IP address...
	I0318 21:58:28.036967   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has current primary IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.037428   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.037452   65211 main.go:141] libmachine: (embed-certs-141758) DBG | skip adding static IP to network mk-embed-certs-141758 - found existing host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"}
	I0318 21:58:28.037461   65211 main.go:141] libmachine: (embed-certs-141758) Reserved static IP address: 192.168.39.243
	I0318 21:58:28.037473   65211 main.go:141] libmachine: (embed-certs-141758) Waiting for SSH to be available...
	I0318 21:58:28.037485   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Getting to WaitForSSH function...
	I0318 21:58:28.039459   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039778   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.039810   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039928   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH client type: external
	I0318 21:58:28.039955   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa (-rw-------)
	I0318 21:58:28.039995   65211 main.go:141] libmachine: (embed-certs-141758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:28.040027   65211 main.go:141] libmachine: (embed-certs-141758) DBG | About to run SSH command:
	I0318 21:58:28.040044   65211 main.go:141] libmachine: (embed-certs-141758) DBG | exit 0
	I0318 21:58:28.169219   65211 main.go:141] libmachine: (embed-certs-141758) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:28.169554   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetConfigRaw
	I0318 21:58:28.170153   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.172372   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.172760   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.172787   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.173016   65211 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/config.json ...
	I0318 21:58:28.173186   65211 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:28.173203   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:28.173399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.175433   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175767   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.175802   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175920   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.176079   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176389   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.176553   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.176790   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.176805   65211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:28.285370   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:28.285407   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285629   65211 buildroot.go:166] provisioning hostname "embed-certs-141758"
	I0318 21:58:28.285651   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285856   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.288382   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288708   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.288739   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288863   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.289067   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289220   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289361   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.289515   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.289717   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.289735   65211 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-141758 && echo "embed-certs-141758" | sudo tee /etc/hostname
	I0318 21:58:28.420311   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-141758
	
	I0318 21:58:28.420351   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.422864   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.423245   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423431   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.423608   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423759   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423891   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.424044   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.424234   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.424256   65211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-141758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-141758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-141758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:28.549277   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:28.549307   65211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:28.549325   65211 buildroot.go:174] setting up certificates
	I0318 21:58:28.549334   65211 provision.go:84] configureAuth start
	I0318 21:58:28.549343   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.549572   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.551881   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552183   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.552205   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.554341   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554629   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.554656   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554752   65211 provision.go:143] copyHostCerts
	I0318 21:58:28.554812   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:28.554825   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:28.554912   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:28.555020   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:28.555032   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:28.555062   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:28.555145   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:28.555155   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:28.555192   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
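copyHostCerts above replaces each certificate in the target directory: an existing copy is removed, then the source file is copied over and its size reported. A minimal Go sketch of that remove-then-copy step, standard library only (the paths in main are placeholders):

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// replaceFile removes any existing destination, then copies the source over
	// and returns the number of bytes written, mirroring the found/rm/cp lines
	// above.
	func replaceFile(src, dst string) (int64, error) {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return 0, err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return 0, err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return 0, err
		}
		defer out.Close()
		return io.Copy(out, in)
	}

	func main() {
		n, err := replaceFile("/tmp/certs/ca.pem", "/tmp/ca.pem") // placeholder paths
		fmt.Println(n, err)
	}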
	I0318 21:58:28.555259   65211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.embed-certs-141758 san=[127.0.0.1 192.168.39.243 embed-certs-141758 localhost minikube]
	I0318 21:58:28.706111   65211 provision.go:177] copyRemoteCerts
	I0318 21:58:28.706158   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:28.706185   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.708537   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708795   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.708822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.709164   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.709335   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.709446   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:28.796199   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:28.827207   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:58:28.854273   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:28.880505   65211 provision.go:87] duration metric: took 331.161751ms to configureAuth
	I0318 21:58:28.880524   65211 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:28.880716   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:58:28.880801   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.883232   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.883583   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883753   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.883926   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884087   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884186   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.884339   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.884481   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.884496   65211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:29.164330   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:29.164357   65211 machine.go:97] duration metric: took 991.159236ms to provisionDockerMachine
	I0318 21:58:29.164370   65211 start.go:293] postStartSetup for "embed-certs-141758" (driver="kvm2")
	I0318 21:58:29.164381   65211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:29.164434   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.164734   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:29.164758   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.167400   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167696   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.167719   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167867   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.168065   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.168235   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.168352   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.256141   65211 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:29.261086   65211 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:29.261104   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:29.261157   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:29.261229   65211 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:29.261309   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:29.271174   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:29.297161   65211 start.go:296] duration metric: took 132.781067ms for postStartSetup
	I0318 21:58:29.297192   65211 fix.go:56] duration metric: took 21.456139061s for fixHost
	I0318 21:58:29.297208   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.299741   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300102   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.300127   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300289   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.300480   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300750   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.300864   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:29.301028   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:29.301039   65211 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:29.413842   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799109.363417589
	
	I0318 21:58:29.413869   65211 fix.go:216] guest clock: 1710799109.363417589
	I0318 21:58:29.413876   65211 fix.go:229] Guest: 2024-03-18 21:58:29.363417589 +0000 UTC Remote: 2024-03-18 21:58:29.297195181 +0000 UTC m=+297.765354372 (delta=66.222408ms)
	I0318 21:58:29.413892   65211 fix.go:200] guest clock delta is within tolerance: 66.222408ms
	I0318 21:58:29.413899   65211 start.go:83] releasing machines lock for "embed-certs-141758", held for 21.572869797s
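The two "guest clock" lines above compare the time reported by the guest with the host's wall clock and accept the fix only if the delta stays inside a tolerance. A minimal Go sketch of that comparison (the tolerance value here is an assumption for illustration):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK returns the absolute guest/host clock difference and whether
	// it falls inside the tolerance, mirroring the "delta=..." comparison above.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1710799109, 363417589)
		host := guest.Add(-66222408 * time.Nanosecond)
		delta, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance is assumed
		fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
	}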
	I0318 21:58:29.413932   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.414191   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:29.416929   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417293   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.417318   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417500   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418019   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418159   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418230   65211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:29.418275   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.418330   65211 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:29.418344   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.420728   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421022   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421053   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421076   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421228   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421413   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421464   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421493   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421593   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.421673   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421749   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.421828   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421960   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.422081   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.502548   65211 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:29.531994   65211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:29.681482   65211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:29.689671   65211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:29.689735   65211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:29.711660   65211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:29.711682   65211 start.go:494] detecting cgroup driver to use...
	I0318 21:58:29.711750   65211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:29.728159   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:29.742409   65211 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:29.742450   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:29.757587   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:29.772218   65211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:29.883164   65211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:30.046773   65211 docker.go:233] disabling docker service ...
	I0318 21:58:30.046845   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:30.065878   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:30.081551   65211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:30.223188   65211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:30.353535   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:30.370291   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:30.391728   65211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:58:30.391789   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.409204   65211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:30.409281   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.426464   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.439964   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.452097   65211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:30.464410   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.475990   65211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.495092   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.506831   65211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:30.517410   65211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:30.517463   65211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:30.532465   65211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
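The netfilter sequence above first probes the bridge sysctl, falls back to loading br_netfilter when the key is missing, and then enables IPv4 forwarding. A minimal Go sketch of the probe-and-fallback step, shelling out with os/exec (requires root and the sysctl/modprobe binaries; illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter probes the bridge sysctl and, if the key is missing,
	// loads br_netfilter so it exists, following the sequence in the log above.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
			return nil // knob already present
		}
		fmt.Println("couldn't verify netfilter; loading br_netfilter")
		return exec.Command("modprobe", "br_netfilter").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("br_netfilter not available:", err)
		}
	}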
	I0318 21:58:30.543958   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:30.679788   65211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:30.839388   65211 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:30.839466   65211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:30.844666   65211 start.go:562] Will wait 60s for crictl version
	I0318 21:58:30.844720   65211 ssh_runner.go:195] Run: which crictl
	I0318 21:58:30.848886   65211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:30.888598   65211 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:30.888686   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.921097   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.954037   65211 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:58:30.955378   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:30.958352   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.958792   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:30.958822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.959064   65211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:30.963556   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
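The bash one-liner above makes the host.minikube.internal entry in /etc/hosts idempotent: any stale line for the name is dropped and a fresh "IP<TAB>name" line is appended. A minimal Go sketch of the same idea (it writes the given file directly instead of going through a temp file and sudo cp, and the path in main is a placeholder):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostEntry drops any existing line for name, appends a fresh
	// "ip<TAB>name" entry, and writes the file back.
	func ensureHostEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostEntry("/tmp/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}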
	I0318 21:58:30.977788   65211 kubeadm.go:877] updating cluster {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:30.977899   65211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:58:30.977949   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:31.018843   65211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:58:31.018926   65211 ssh_runner.go:195] Run: which lz4
	I0318 21:58:31.023589   65211 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:58:31.028416   65211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:31.028445   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:58:30.668558   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting to get IP...
	I0318 21:58:30.669483   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.669936   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.670023   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.669931   66350 retry.go:31] will retry after 222.544346ms: waiting for machine to come up
	I0318 21:58:30.894570   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.895113   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.895140   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.895068   66350 retry.go:31] will retry after 355.752794ms: waiting for machine to come up
	I0318 21:58:31.252797   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.253265   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.253293   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.253217   66350 retry.go:31] will retry after 473.104426ms: waiting for machine to come up
	I0318 21:58:31.727579   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.728129   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.728157   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.728079   66350 retry.go:31] will retry after 566.412205ms: waiting for machine to come up
	I0318 21:58:32.295552   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.296044   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.296072   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.296004   66350 retry.go:31] will retry after 573.484484ms: waiting for machine to come up
	I0318 21:58:32.870871   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.871287   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.871346   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.871277   66350 retry.go:31] will retry after 932.863596ms: waiting for machine to come up
	I0318 21:58:33.805377   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:33.805847   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:33.805895   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:33.805795   66350 retry.go:31] will retry after 1.069321569s: waiting for machine to come up
	I0318 21:58:34.877311   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:34.877827   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:34.877860   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:34.877773   66350 retry.go:31] will retry after 1.27837332s: waiting for machine to come up
	I0318 21:58:32.944637   65211 crio.go:462] duration metric: took 1.921083293s to copy over tarball
	I0318 21:58:32.944709   65211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:35.696230   65211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751490576s)
	I0318 21:58:35.696261   65211 crio.go:469] duration metric: took 2.751600779s to extract the tarball
	I0318 21:58:35.696271   65211 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:35.739467   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:35.794398   65211 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:58:35.794427   65211 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:58:35.794436   65211 kubeadm.go:928] updating node { 192.168.39.243 8443 v1.28.4 crio true true} ...
	I0318 21:58:35.794559   65211 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-141758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:35.794625   65211 ssh_runner.go:195] Run: crio config
	I0318 21:58:35.844849   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:35.844877   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:35.844888   65211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:35.844923   65211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-141758 NodeName:embed-certs-141758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:58:35.845069   65211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-141758"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:35.845124   65211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:58:35.856885   65211 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:35.856950   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:35.867990   65211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 21:58:35.887057   65211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:35.909244   65211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 21:58:35.931267   65211 ssh_runner.go:195] Run: grep 192.168.39.243	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:35.935793   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:35.950323   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:36.093377   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:36.112548   65211 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758 for IP: 192.168.39.243
	I0318 21:58:36.112575   65211 certs.go:194] generating shared ca certs ...
	I0318 21:58:36.112596   65211 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:36.112766   65211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:36.112813   65211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:36.112822   65211 certs.go:256] generating profile certs ...
	I0318 21:58:36.112943   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/client.key
	I0318 21:58:36.113043   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key.d575a4ae
	I0318 21:58:36.113097   65211 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key
	I0318 21:58:36.113263   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:36.113307   65211 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:36.113322   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:36.113359   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:36.113396   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:36.113429   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:36.113536   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:36.114412   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:36.147930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:36.177554   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:36.208374   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:36.243425   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 21:58:36.276720   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:58:36.317930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:36.345717   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:58:36.371655   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:36.396998   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:36.422750   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:36.448117   65211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:36.466558   65211 ssh_runner.go:195] Run: openssl version
	I0318 21:58:36.472888   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:36.484389   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489534   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489585   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.496045   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:36.507723   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:36.519030   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524214   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524267   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.531109   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:36.543912   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:36.556130   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561330   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561369   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.567883   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:36.158196   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:36.158633   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:36.158667   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:36.158581   66350 retry.go:31] will retry after 1.348066025s: waiting for machine to come up
	I0318 21:58:37.509248   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:37.509617   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:37.509637   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:37.509581   66350 retry.go:31] will retry after 2.080074922s: waiting for machine to come up
	I0318 21:58:39.591514   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:39.591973   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:39.592001   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:39.591934   66350 retry.go:31] will retry after 2.302421788s: waiting for machine to come up
	I0318 21:58:36.579819   65211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:36.824046   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:36.831273   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:36.838571   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:36.845621   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:36.852423   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:36.859433   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:58:36.866091   65211 kubeadm.go:391] StartCluster: {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:36.866212   65211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:36.866263   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:36.912390   65211 cri.go:89] found id: ""
	I0318 21:58:36.912460   65211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:36.929896   65211 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:36.929923   65211 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:36.929931   65211 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:36.929985   65211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:36.947191   65211 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:36.948613   65211 kubeconfig.go:125] found "embed-certs-141758" server: "https://192.168.39.243:8443"
	I0318 21:58:36.951641   65211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:36.966095   65211 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.243
	I0318 21:58:36.966135   65211 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:36.966150   65211 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:36.966216   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:37.022620   65211 cri.go:89] found id: ""
	I0318 21:58:37.022680   65211 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:37.042338   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:37.054534   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:37.054552   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:37.054588   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:37.066099   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:37.066166   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:37.077340   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:37.088158   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:37.088214   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:37.099190   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.110081   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:37.110118   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.121852   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:37.133161   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:37.133215   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:37.144199   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:37.155593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.271593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.921199   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.175721   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.264478   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.377591   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:38.377683   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:38.878031   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.377859   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.417546   65211 api_server.go:72] duration metric: took 1.039957218s to wait for apiserver process to appear ...
	I0318 21:58:39.417576   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:58:39.417599   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:39.418125   65211 api_server.go:269] stopped: https://192.168.39.243:8443/healthz: Get "https://192.168.39.243:8443/healthz": dial tcp 192.168.39.243:8443: connect: connection refused
	I0318 21:58:39.917663   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.450620   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.450656   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.450668   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.489722   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.489755   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.918487   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.924551   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:42.924584   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.418077   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.424938   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:43.424969   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.918053   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.922905   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 21:58:43.931126   65211 api_server.go:141] control plane version: v1.28.4
	I0318 21:58:43.931151   65211 api_server.go:131] duration metric: took 4.513568499s to wait for apiserver health ...
	I0318 21:58:43.931159   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:43.931173   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:43.932876   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:58:41.897573   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:41.898012   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:41.898035   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:41.897964   66350 retry.go:31] will retry after 2.645096928s: waiting for machine to come up
	I0318 21:58:44.544646   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:44.545116   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:44.545153   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:44.545053   66350 retry.go:31] will retry after 3.010240256s: waiting for machine to come up
	I0318 21:58:43.934155   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:58:43.948750   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:58:43.978849   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:58:43.991046   65211 system_pods.go:59] 8 kube-system pods found
	I0318 21:58:43.991082   65211 system_pods.go:61] "coredns-5dd5756b68-r9pft" [add358cf-d544-4107-a05f-5e60542ea456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:58:43.991089   65211 system_pods.go:61] "etcd-embed-certs-141758" [31274121-ec65-46b5-bcda-65698c28bd1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:58:43.991095   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [61e4c0db-7a20-4c93-83b3-de4738e82614] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:58:43.991100   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [c2ffe900-4e3a-4c21-ae8f-cd42475207c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:58:43.991105   65211 system_pods.go:61] "kube-proxy-klmnb" [45b0c762-4eaf-4e8a-b321-0d474f61086e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:58:43.991109   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [5aeed9aa-9d98-49c0-bf8a-3998738f6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:58:43.991114   65211 system_pods.go:61] "metrics-server-57f55c9bc5-vt7hj" [949e4c0f-6a76-4141-b30c-f27291873f14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:58:43.991123   65211 system_pods.go:61] "storage-provisioner" [0aca1af6-3221-4698-915b-cabb9da662bf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:58:43.991128   65211 system_pods.go:74] duration metric: took 12.25858ms to wait for pod list to return data ...
	I0318 21:58:43.991136   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:58:43.996109   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:58:43.996135   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 21:58:43.996146   65211 node_conditions.go:105] duration metric: took 5.004614ms to run NodePressure ...
	I0318 21:58:43.996163   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:44.227606   65211 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234823   65211 kubeadm.go:733] kubelet initialised
	I0318 21:58:44.234846   65211 kubeadm.go:734] duration metric: took 7.215375ms waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234854   65211 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:58:44.241197   65211 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.248990   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249008   65211 pod_ready.go:81] duration metric: took 7.784519ms for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.249016   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249022   65211 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.254792   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254820   65211 pod_ready.go:81] duration metric: took 5.788084ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.254833   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254846   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.261248   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261272   65211 pod_ready.go:81] duration metric: took 6.415486ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.261282   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261291   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.383016   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383056   65211 pod_ready.go:81] duration metric: took 121.750871ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.383069   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383078   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784241   65211 pod_ready.go:92] pod "kube-proxy-klmnb" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:44.784264   65211 pod_ready.go:81] duration metric: took 401.177044ms for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784272   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:48.950018   65699 start.go:364] duration metric: took 4m12.246849763s to acquireMachinesLock for "no-preload-963041"
	I0318 21:58:48.950078   65699 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:48.950087   65699 fix.go:54] fixHost starting: 
	I0318 21:58:48.950522   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:48.950556   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:48.966094   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0318 21:58:48.966492   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:48.966970   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:58:48.966994   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:48.967295   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:48.967443   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:58:48.967548   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:58:48.968800   65699 fix.go:112] recreateIfNeeded on no-preload-963041: state=Stopped err=<nil>
	I0318 21:58:48.968835   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	W0318 21:58:48.969105   65699 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:48.970900   65699 out.go:177] * Restarting existing kvm2 VM for "no-preload-963041" ...
	I0318 21:58:47.559274   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559793   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has current primary IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559814   65622 main.go:141] libmachine: (old-k8s-version-648232) Found IP for machine: 192.168.61.111
	I0318 21:58:47.559828   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserving static IP address...
	I0318 21:58:47.560325   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.560359   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserved static IP address: 192.168.61.111
	I0318 21:58:47.560385   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | skip adding static IP to network mk-old-k8s-version-648232 - found existing host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"}
	I0318 21:58:47.560401   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Getting to WaitForSSH function...
	I0318 21:58:47.560417   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting for SSH to be available...
	I0318 21:58:47.562852   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563285   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.563314   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563494   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH client type: external
	I0318 21:58:47.563522   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa (-rw-------)
	I0318 21:58:47.563561   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:47.563576   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | About to run SSH command:
	I0318 21:58:47.563622   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | exit 0
	I0318 21:58:47.692948   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:47.693373   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:58:47.694034   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:47.696795   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697184   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.697213   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697437   65622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:58:47.697637   65622 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:47.697658   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:47.697846   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.700225   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700525   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.700549   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700649   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.700816   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.700993   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.701112   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.701276   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.701440   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.701450   65622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:47.809658   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
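
The provisioning step above amounts to running `hostname` over SSH against the guest at 192.168.61.111 with the jenkins key shown earlier. As an illustration only (this is not minikube's ssh_runner or libmachine code), a minimal Go sketch of that round trip with golang.org/x/crypto/ssh might look like the following; the key path, user and address are taken from the log, everything else is assumed.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and address as reported in the log above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "192.168.61.111:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	// Run the same command the provisioner starts with.
    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("guest hostname: %s", out)
    }
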
	
	I0318 21:58:47.809690   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.809920   65622 buildroot.go:166] provisioning hostname "old-k8s-version-648232"
	I0318 21:58:47.809945   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.810132   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.812510   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.812869   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.812896   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.813079   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.813266   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813414   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813559   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.813726   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.813935   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.813954   65622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-648232 && echo "old-k8s-version-648232" | sudo tee /etc/hostname
	I0318 21:58:47.949030   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-648232
	
	I0318 21:58:47.949063   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.952028   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952387   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.952424   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952586   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.952768   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.952972   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.953109   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.953280   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.953488   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.953514   65622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-648232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-648232/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-648232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:48.072416   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:48.072457   65622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:48.072484   65622 buildroot.go:174] setting up certificates
	I0318 21:58:48.072494   65622 provision.go:84] configureAuth start
	I0318 21:58:48.072506   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:48.072802   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.075880   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076202   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.076235   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076407   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.078791   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079125   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.079155   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079292   65622 provision.go:143] copyHostCerts
	I0318 21:58:48.079370   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:48.079385   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:48.079441   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:48.079552   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:48.079565   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:48.079595   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:48.079675   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:48.079686   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:48.079719   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:48.079797   65622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-648232 san=[127.0.0.1 192.168.61.111 localhost minikube old-k8s-version-648232]
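
provision.go above mints a server certificate signed by the minikube CA with the SANs listed (127.0.0.1, 192.168.61.111, localhost, minikube, old-k8s-version-648232). Below is a self-contained Go sketch of generating such a SAN certificate with crypto/x509; the throwaway in-memory CA is an assumption purely so the example runs on its own, whereas minikube loads its existing ca.pem / ca-key.pem from disk.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA for the example (minikube reuses its persistent CA instead).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-648232"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.111")},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-648232"},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
    		log.Fatal(err)
    	}
    }
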
	I0318 21:58:48.236852   65622 provision.go:177] copyRemoteCerts
	I0318 21:58:48.236923   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:48.236952   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.239485   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.239807   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.239839   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.240022   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.240187   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.240338   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.240470   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.338739   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:48.367538   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:58:48.397586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:48.425384   65622 provision.go:87] duration metric: took 352.877274ms to configureAuth
	I0318 21:58:48.425415   65622 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:48.425624   65622 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:58:48.425693   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.427989   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428345   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.428365   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428593   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.428793   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.428968   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.429114   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.429269   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.429434   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.429455   65622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:48.706098   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:48.706131   65622 machine.go:97] duration metric: took 1.008474629s to provisionDockerMachine
	I0318 21:58:48.706148   65622 start.go:293] postStartSetup for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:58:48.706165   65622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:48.706193   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.706546   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:48.706580   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.709104   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709434   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.709464   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709589   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.709787   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.709969   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.710109   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.792915   65622 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:48.797845   65622 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:48.797864   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:48.797932   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:48.798038   65622 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:48.798150   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:48.808487   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:48.838863   65622 start.go:296] duration metric: took 132.703395ms for postStartSetup
	I0318 21:58:48.838896   65622 fix.go:56] duration metric: took 19.424816589s for fixHost
	I0318 21:58:48.838927   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.841223   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841572   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.841603   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841683   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.841876   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842015   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842138   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.842295   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.842469   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.842483   65622 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:58:48.949868   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799128.925696756
	
	I0318 21:58:48.949893   65622 fix.go:216] guest clock: 1710799128.925696756
	I0318 21:58:48.949901   65622 fix.go:229] Guest: 2024-03-18 21:58:48.925696756 +0000 UTC Remote: 2024-03-18 21:58:48.838901995 +0000 UTC m=+258.909510680 (delta=86.794761ms)
	I0318 21:58:48.949925   65622 fix.go:200] guest clock delta is within tolerance: 86.794761ms
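
fix.go above reads the guest clock with `date +%s.%N`, compares it against the host clock, and accepts the machine when the delta stays inside a tolerance (here 86.794761ms). A hedged Go sketch of that comparison follows; the 2-second tolerance and the seconds.nanoseconds parsing are assumptions for illustration, not minikube's exact implementation.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1710799128.925696756" (seconds.nanoseconds from
    // `date +%s.%N`, assuming a 9-digit fractional part) into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	const tolerance = 2 * time.Second // assumed tolerance, for illustration only

    	guest, err := parseGuestClock("1710799128.925696756") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
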
	I0318 21:58:48.949932   65622 start.go:83] releasing machines lock for "old-k8s-version-648232", held for 19.535879787s
	I0318 21:58:48.949963   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.950245   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.952656   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953000   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.953030   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953184   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953664   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953845   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953931   65622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:48.953973   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.954053   65622 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:48.954070   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.956479   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956764   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956801   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.956828   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956944   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957100   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957250   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957281   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.957302   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.957432   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.957451   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957582   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957721   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957858   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:49.066050   65622 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:49.072126   65622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:49.220860   65622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:49.227821   65622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:49.227882   65622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:49.245262   65622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:49.245285   65622 start.go:494] detecting cgroup driver to use...
	I0318 21:58:49.245359   65622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:49.261736   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:49.278239   65622 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:49.278289   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:49.297240   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:49.312813   65622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:49.435983   65622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:49.584356   65622 docker.go:233] disabling docker service ...
	I0318 21:58:49.584432   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:49.603469   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:49.619602   65622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:49.775541   65622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:49.919861   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:49.940785   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:49.964296   65622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:58:49.964356   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.976612   65622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:49.977221   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.988978   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.000697   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
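
The four Run lines above patch the CRI-O drop-in with sed: swap the pause image, force the cgroupfs cgroup manager, and re-add the conmon_cgroup entry. As a rough Go equivalent of that kind of in-place config rewrite (the setCrioKey helper and its regex are invented for illustration and cover only the pause_image and cgroup_manager keys, not the conmon_cgroup insertion):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioKey rewrites a `key = value` line in a CRI-O drop-in, the same
    // effect as the `sed -i 's|^.*key = .*$|...|'` calls in the log above.
    func setCrioKey(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
    }

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf" // path from the log; writing it needs root on the guest
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf = setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
    	conf = setCrioKey(conf, "cgroup_manager", "cgroupfs")
    	if err := os.WriteFile(path, conf, 0o644); err != nil {
    		panic(err)
    	}
    }
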
	I0318 21:58:50.012348   65622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:50.023873   65622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:50.033574   65622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:50.033611   65622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:50.047262   65622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
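
The netfilter check above is a probe-then-fix pattern: if the bridge sysctl key is missing (as the status-255 stat error shows), load br_netfilter and make sure IPv4 forwarding is on. A simplified Go/os-exec sketch of that fallback, with the command strings taken from the log and the error handling reduced to the bare minimum (not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %w (output: %s)", name, args, err, out)
    	}
    	return nil
    }

    func main() {
    	// Probe: does the bridge netfilter sysctl exist yet?
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
    		// Fallback: load the kernel module that provides it.
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			panic(err)
    		}
    	}
    	// Always make sure forwarding is enabled for pod traffic.
    	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		panic(err)
    	}
    	fmt.Println("netfilter prerequisites in place")
    }
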
	I0318 21:58:50.058328   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:50.205960   65622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:50.356293   65622 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:50.356376   65622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:50.361732   65622 start.go:562] Will wait 60s for crictl version
	I0318 21:58:50.361796   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:50.366347   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:50.406298   65622 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:50.406398   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.440705   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.473017   65622 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:58:46.795337   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:49.295100   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.299437   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:48.972407   65699 main.go:141] libmachine: (no-preload-963041) Calling .Start
	I0318 21:58:48.972572   65699 main.go:141] libmachine: (no-preload-963041) Ensuring networks are active...
	I0318 21:58:48.973251   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network default is active
	I0318 21:58:48.973606   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network mk-no-preload-963041 is active
	I0318 21:58:48.973992   65699 main.go:141] libmachine: (no-preload-963041) Getting domain xml...
	I0318 21:58:48.974629   65699 main.go:141] libmachine: (no-preload-963041) Creating domain...
	I0318 21:58:50.190010   65699 main.go:141] libmachine: (no-preload-963041) Waiting to get IP...
	I0318 21:58:50.190750   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.191241   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.191320   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.191220   66466 retry.go:31] will retry after 238.162453ms: waiting for machine to come up
	I0318 21:58:50.430778   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.431262   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.431292   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.431191   66466 retry.go:31] will retry after 318.744541ms: waiting for machine to come up
	I0318 21:58:50.751612   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.752051   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.752086   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.752007   66466 retry.go:31] will retry after 464.29047ms: waiting for machine to come up
	I0318 21:58:51.218462   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.219034   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.219062   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.218983   66466 retry.go:31] will retry after 476.466311ms: waiting for machine to come up
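
While the no-preload-963041 domain boots, libmachine polls for a DHCP lease with randomized, growing delays (the retry.go lines above). A simplified sketch of that wait loop; lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query, and the backoff figures and deadline are illustrative rather than minikube's actual values.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt network's
    // DHCP leases for the domain's MAC address.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a randomized, growing delay until it
    // succeeds or the deadline passes.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
    	start := time.Now()
    	delay := 200 * time.Millisecond
    	for time.Since(start) < deadline {
    		ip, err := lookupIP(mac)
    		if err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // grow the base delay each attempt
    	}
    	return "", fmt.Errorf("no IP for %s after %v", mac, deadline)
    }

    func main() {
    	ip, err := waitForIP("52:54:00:b2:30:3e", 3*time.Second)
    	fmt.Println(ip, err)
    }
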
	I0318 21:58:50.474496   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:50.477908   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478353   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:50.478389   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478618   65622 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:50.483617   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:50.499147   65622 kubeadm.go:877] updating cluster {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:50.499269   65622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:58:50.499333   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:50.551649   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:50.551716   65622 ssh_runner.go:195] Run: which lz4
	I0318 21:58:50.556525   65622 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:58:50.561566   65622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:50.561594   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:58:52.646283   65622 crio.go:462] duration metric: took 2.089798336s to copy over tarball
	I0318 21:58:52.646359   65622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:53.792483   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.696634   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.697179   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.697208   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.697099   66466 retry.go:31] will retry after 520.896381ms: waiting for machine to come up
	I0318 21:58:52.219861   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:52.220480   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:52.220506   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:52.220414   66466 retry.go:31] will retry after 872.240898ms: waiting for machine to come up
	I0318 21:58:53.094123   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.094547   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.094580   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.094499   66466 retry.go:31] will retry after 757.325359ms: waiting for machine to come up
	I0318 21:58:53.852954   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.853422   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.853453   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.853358   66466 retry.go:31] will retry after 1.459327383s: waiting for machine to come up
	I0318 21:58:55.313969   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:55.314382   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:55.314413   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:55.314328   66466 retry.go:31] will retry after 1.373606235s: waiting for machine to come up
	I0318 21:58:55.995228   65622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.348837805s)
	I0318 21:58:55.995262   65622 crio.go:469] duration metric: took 3.348951107s to extract the tarball
	I0318 21:58:55.995271   65622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:56.043148   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:56.091295   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:56.091320   65622 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:58:56.091409   65622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.091418   65622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.091431   65622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.091421   65622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.091448   65622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.091471   65622 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.091506   65622 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.091512   65622 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:58:56.092923   65622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.093028   65622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.093048   65622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.093052   65622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.092924   65622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.093136   65622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.093143   65622 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:58:56.093250   65622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.239200   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.242232   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.244160   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.248823   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.255548   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.264753   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.306940   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:58:56.359783   65622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:58:56.359825   65622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.359874   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413012   65622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:58:56.413051   65622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.413101   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413420   65622 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:58:56.413455   65622 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.413490   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.442743   65622 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:58:56.442787   65622 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.442832   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.450680   65622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:58:56.450733   65622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.450798   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.462926   65622 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:58:56.462963   65622 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:58:56.462989   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.462992   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.463034   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.463090   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.463138   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.463145   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.463159   65622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:58:56.463183   65622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.463221   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.592127   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.592159   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:58:56.593931   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:58:56.593968   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:58:56.593973   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:58:56.594059   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:58:56.594143   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:58:56.660138   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:58:56.660360   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:58:56.983635   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:57.142451   65622 cache_images.go:92] duration metric: took 1.051113719s to LoadCachedImages
	W0318 21:58:57.142554   65622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0318 21:58:57.142575   65622 kubeadm.go:928] updating node { 192.168.61.111 8443 v1.20.0 crio true true} ...
	I0318 21:58:57.142723   65622 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-648232 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:57.142797   65622 ssh_runner.go:195] Run: crio config
	I0318 21:58:57.195416   65622 cni.go:84] Creating CNI manager for ""
	I0318 21:58:57.195439   65622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:57.195451   65622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:57.195468   65622 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-648232 NodeName:old-k8s-version-648232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:58:57.195585   65622 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-648232"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:57.195650   65622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:58:57.208700   65622 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:57.208757   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:57.220276   65622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 21:58:57.239513   65622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:57.258540   65622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 21:58:57.277932   65622 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:57.282433   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:57.298049   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:57.427745   65622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:57.459845   65622 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232 for IP: 192.168.61.111
	I0318 21:58:57.459867   65622 certs.go:194] generating shared ca certs ...
	I0318 21:58:57.459904   65622 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:57.460072   65622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:57.460123   65622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:57.460138   65622 certs.go:256] generating profile certs ...
	I0318 21:58:57.460254   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key
	I0318 21:58:57.460328   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4
	I0318 21:58:57.460376   65622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key
	I0318 21:58:57.460521   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:57.460560   65622 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:57.460573   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:57.460602   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:57.460637   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:57.460668   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:57.460733   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:57.461586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:57.515591   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:57.541750   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:57.575282   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:57.617495   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:58:57.657111   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:58:57.705104   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:57.737956   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:58:57.766218   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:57.793952   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:57.824458   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:57.852188   65622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:57.872773   65622 ssh_runner.go:195] Run: openssl version
	I0318 21:58:57.880817   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:57.896644   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902576   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902636   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.908893   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:57.922730   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:57.936508   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941802   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941839   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.948093   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:57.961852   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:57.974049   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978886   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978929   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.984848   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:57.997033   65622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:58.002171   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:58.008665   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:58.014908   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:58.021663   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:58.029605   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:58.038208   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:58:58.044738   65622 kubeadm.go:391] StartCluster: {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:58.044828   65622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:58.044881   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.095866   65622 cri.go:89] found id: ""
	I0318 21:58:58.096010   65622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:58.108723   65622 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:58.108745   65622 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:58.108751   65622 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:58.108797   65622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:58.120754   65622 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:58.121803   65622 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:58:58.122532   65622 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-648232" cluster setting kubeconfig missing "old-k8s-version-648232" context setting]
	I0318 21:58:58.123561   65622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:58.125229   65622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:58.136331   65622 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.111
	I0318 21:58:58.136360   65622 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:58.136372   65622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:58.136416   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.179370   65622 cri.go:89] found id: ""
	I0318 21:58:58.179465   65622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:58.197860   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:58.208772   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:58.208796   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:58.208837   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:58.219033   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:58.219090   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:58.230223   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:58.240823   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:58.240886   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:58.251629   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.262525   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:58.262573   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.274831   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:58.286644   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:58.286690   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:58.298127   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:58.309664   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:58.456818   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.106974   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.334718   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.434113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.534368   65622 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:59.534461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:57.057776   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:57.791727   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:57.791754   65211 pod_ready.go:81] duration metric: took 13.007474768s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:57.791769   65211 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:59.800074   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:56.689643   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:56.690039   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:56.690064   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:56.690020   66466 retry.go:31] will retry after 1.905319343s: waiting for machine to come up
	I0318 21:58:58.597961   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:58.598470   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:58.598501   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:58.598420   66466 retry.go:31] will retry after 2.720364267s: waiting for machine to come up
	I0318 21:59:01.321901   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:01.322290   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:01.322312   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:01.322254   66466 retry.go:31] will retry after 2.73029124s: waiting for machine to come up
	I0318 21:59:00.035251   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:00.534822   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.034721   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.535447   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.034809   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.535193   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.034597   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.534670   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.035493   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.535148   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.299143   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.800475   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.054294   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:04.054715   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:04.054752   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:04.054671   66466 retry.go:31] will retry after 3.148777081s: waiting for machine to come up
	I0318 21:59:08.706453   65170 start.go:364] duration metric: took 55.86344587s to acquireMachinesLock for "default-k8s-diff-port-660775"
	I0318 21:59:08.706504   65170 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:59:08.706515   65170 fix.go:54] fixHost starting: 
	I0318 21:59:08.706934   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:08.706970   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:08.723564   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0318 21:59:08.723935   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:08.724359   65170 main.go:141] libmachine: Using API Version  1
	I0318 21:59:08.724381   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:08.724671   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:08.724874   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:08.725045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 21:59:08.726635   65170 fix.go:112] recreateIfNeeded on default-k8s-diff-port-660775: state=Stopped err=<nil>
	I0318 21:59:08.726656   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	W0318 21:59:08.726813   65170 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:59:08.728839   65170 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-660775" ...
	I0318 21:59:05.035054   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:05.535108   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.035211   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.535398   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.035017   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.534769   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.035221   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.534593   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.035328   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.534533   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.730181   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Start
	I0318 21:59:08.730374   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring networks are active...
	I0318 21:59:08.731140   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network default is active
	I0318 21:59:08.731488   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network mk-default-k8s-diff-port-660775 is active
	I0318 21:59:08.731850   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Getting domain xml...
	I0318 21:59:08.732544   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Creating domain...
	I0318 21:59:10.014924   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting to get IP...
	I0318 21:59:10.015822   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016215   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016299   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.016206   66608 retry.go:31] will retry after 301.369371ms: waiting for machine to come up
	I0318 21:59:07.205807   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206239   65699 main.go:141] libmachine: (no-preload-963041) Found IP for machine: 192.168.72.84
	I0318 21:59:07.206266   65699 main.go:141] libmachine: (no-preload-963041) Reserving static IP address...
	I0318 21:59:07.206281   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has current primary IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206636   65699 main.go:141] libmachine: (no-preload-963041) Reserved static IP address: 192.168.72.84
	I0318 21:59:07.206659   65699 main.go:141] libmachine: (no-preload-963041) Waiting for SSH to be available...
	I0318 21:59:07.206686   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.206711   65699 main.go:141] libmachine: (no-preload-963041) DBG | skip adding static IP to network mk-no-preload-963041 - found existing host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"}
	I0318 21:59:07.206728   65699 main.go:141] libmachine: (no-preload-963041) DBG | Getting to WaitForSSH function...
	I0318 21:59:07.208790   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209157   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.209202   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209306   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH client type: external
	I0318 21:59:07.209331   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa (-rw-------)
	I0318 21:59:07.209367   65699 main.go:141] libmachine: (no-preload-963041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:07.209381   65699 main.go:141] libmachine: (no-preload-963041) DBG | About to run SSH command:
	I0318 21:59:07.209395   65699 main.go:141] libmachine: (no-preload-963041) DBG | exit 0
	I0318 21:59:07.337357   65699 main.go:141] libmachine: (no-preload-963041) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:07.337688   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetConfigRaw
	I0318 21:59:07.338258   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.340609   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.340957   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.340996   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.341213   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:59:07.341396   65699 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:07.341462   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:07.341668   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.343956   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344275   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.344311   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344395   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.344580   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344756   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344891   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.345086   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.345264   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.345276   65699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:07.457491   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:07.457543   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457778   65699 buildroot.go:166] provisioning hostname "no-preload-963041"
	I0318 21:59:07.457802   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457975   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.460729   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461120   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.461145   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461286   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.461480   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461643   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461797   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.461980   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.462179   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.462193   65699 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-963041 && echo "no-preload-963041" | sudo tee /etc/hostname
	I0318 21:59:07.592194   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-963041
	
	I0318 21:59:07.592219   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.594794   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595141   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.595177   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595305   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.595484   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595673   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595836   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.595987   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.596144   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.596160   65699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-963041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-963041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-963041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:07.719593   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:07.719622   65699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:07.719655   65699 buildroot.go:174] setting up certificates
	I0318 21:59:07.719667   65699 provision.go:84] configureAuth start
	I0318 21:59:07.719681   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.719928   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.722544   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.722907   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.722935   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.723095   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.725108   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725391   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.725420   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725522   65699 provision.go:143] copyHostCerts
	I0318 21:59:07.725582   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:07.725595   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:07.725665   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:07.725780   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:07.725792   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:07.725817   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:07.725874   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:07.725881   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:07.725898   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:07.725945   65699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.no-preload-963041 san=[127.0.0.1 192.168.72.84 localhost minikube no-preload-963041]
	I0318 21:59:07.893632   65699 provision.go:177] copyRemoteCerts
	I0318 21:59:07.893685   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:07.893711   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.896227   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896501   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.896527   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896692   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.896859   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.897035   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.897205   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:07.983501   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:08.014432   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:59:08.043755   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:59:08.074388   65699 provision.go:87] duration metric: took 354.707214ms to configureAuth
	I0318 21:59:08.074413   65699 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:08.074571   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:08.074638   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.077314   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077658   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.077690   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077837   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.077996   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078150   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078289   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.078435   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.078582   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.078596   65699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:08.446711   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:08.446745   65699 machine.go:97] duration metric: took 1.105332987s to provisionDockerMachine
	I0318 21:59:08.446757   65699 start.go:293] postStartSetup for "no-preload-963041" (driver="kvm2")
	I0318 21:59:08.446772   65699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:08.446787   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.447090   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:08.447118   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.449551   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.449917   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.449955   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.450117   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.450308   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.450471   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.450611   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.542283   65699 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:08.547389   65699 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:08.547423   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:08.547501   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:08.547606   65699 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:08.547732   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:08.558721   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:08.586136   65699 start.go:296] duration metric: took 139.367706ms for postStartSetup
	I0318 21:59:08.586177   65699 fix.go:56] duration metric: took 19.636089577s for fixHost
	I0318 21:59:08.586201   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.588809   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589192   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.589219   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589435   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.589604   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589731   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589838   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.589972   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.590182   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.590197   65699 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:59:08.706260   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799148.650279332
	
	I0318 21:59:08.706283   65699 fix.go:216] guest clock: 1710799148.650279332
	I0318 21:59:08.706293   65699 fix.go:229] Guest: 2024-03-18 21:59:08.650279332 +0000 UTC Remote: 2024-03-18 21:59:08.586181408 +0000 UTC m=+272.029432082 (delta=64.097924ms)
	I0318 21:59:08.706337   65699 fix.go:200] guest clock delta is within tolerance: 64.097924ms
	I0318 21:59:08.706350   65699 start.go:83] releasing machines lock for "no-preload-963041", held for 19.756290817s
	I0318 21:59:08.706384   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.706707   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:08.709113   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709389   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.709417   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709561   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710009   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710155   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710229   65699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:08.710278   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.710330   65699 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:08.710349   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.713131   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713154   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713464   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713492   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713521   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713536   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713632   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713739   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713824   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713987   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713988   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714117   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.714177   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714337   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.827151   65699 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:08.833847   65699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:08.985638   65699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:08.992294   65699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:08.992372   65699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:09.009419   65699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:09.009444   65699 start.go:494] detecting cgroup driver to use...
	I0318 21:59:09.009509   65699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:09.031942   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:09.051842   65699 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:09.051901   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:09.068136   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:09.084445   65699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:09.234323   65699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:09.402144   65699 docker.go:233] disabling docker service ...
	I0318 21:59:09.402210   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:09.419960   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:09.434836   65699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:09.572242   65699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:09.718817   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:09.734607   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:09.756470   65699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:09.756533   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.768595   65699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:09.768685   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.780726   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.800700   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.817396   65699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:09.829896   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.842211   65699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.867273   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
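	(Taken together, the sed edits above leave the relevant portion of /etc/crio/crio.conf.d/02-crio.conf looking roughly like the following; this is reconstructed from the commands shown, not captured from the VM:)

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]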
	I0318 21:59:09.880909   65699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:09.893254   65699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:09.893297   65699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:09.910897   65699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:09.922400   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:10.065248   65699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:10.223498   65699 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:10.223577   65699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:10.230686   65699 start.go:562] Will wait 60s for crictl version
	I0318 21:59:10.230752   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.235527   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:10.278655   65699 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:10.278756   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.310992   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.344925   65699 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 21:59:07.298973   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:09.799803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:10.346255   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:10.349081   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349418   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:10.349437   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349657   65699 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:10.354793   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
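	(The bash one-liner above drops any stale host.minikube.internal entry and rewrites /etc/hosts atomically via a temp file; the net effect, reconstructed from the command rather than read back from the guest, is a single entry like:)

	    192.168.72.1	host.minikube.internal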
	I0318 21:59:10.369744   65699 kubeadm.go:877] updating cluster {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:10.369893   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:59:10.369951   65699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:10.409975   65699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 21:59:10.410001   65699 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:59:10.410062   65699 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.410074   65699 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.410086   65699 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.410122   65699 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.410148   65699 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.410166   65699 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 21:59:10.410213   65699 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.410223   65699 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411690   65699 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.411695   65699 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 21:59:10.411730   65699 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.411747   65699 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.411764   65699 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.411793   65699 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.553195   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 21:59:10.553249   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.555774   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.559123   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.562266   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.571390   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.592690   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.702213   65699 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 21:59:10.702265   65699 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.702314   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857028   65699 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 21:59:10.857072   65699 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.857087   65699 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 21:59:10.857117   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857146   65699 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.857154   65699 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 21:59:10.857180   65699 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.857197   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857214   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857211   65699 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 21:59:10.857250   65699 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.857254   65699 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 21:59:10.857264   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.857275   65699 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.857282   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857305   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.872164   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.872195   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.872268   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.927043   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.927147   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.927095   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.927219   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.972625   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 21:59:10.972740   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:11.016239   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 21:59:11.016291   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.016356   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:11.016380   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.047703   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 21:59:11.047732   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047784   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047849   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.047952   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.069007   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 21:59:11.069064   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 21:59:11.069095   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 21:59:11.069126   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 21:59:11.069139   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:10.035384   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.534785   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.034607   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.535142   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.035259   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.535494   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.034673   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.535452   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.034630   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.535058   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.319858   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320279   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320310   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.320224   66608 retry.go:31] will retry after 253.332307ms: waiting for machine to come up
	I0318 21:59:10.575748   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576242   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576271   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.576194   66608 retry.go:31] will retry after 484.439329ms: waiting for machine to come up
	I0318 21:59:11.061837   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062291   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.062247   66608 retry.go:31] will retry after 520.757249ms: waiting for machine to come up
	I0318 21:59:11.585112   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585541   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585571   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.585485   66608 retry.go:31] will retry after 482.335377ms: waiting for machine to come up
	I0318 21:59:12.068813   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069420   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069456   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:12.069374   66608 retry.go:31] will retry after 936.563875ms: waiting for machine to come up
	I0318 21:59:13.007582   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.007986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.008012   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.007945   66608 retry.go:31] will retry after 864.468016ms: waiting for machine to come up
	I0318 21:59:13.874400   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874910   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874942   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.874875   66608 retry.go:31] will retry after 1.239808671s: waiting for machine to come up
	I0318 21:59:15.116440   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116834   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116855   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:15.116784   66608 retry.go:31] will retry after 1.208141339s: waiting for machine to come up
	I0318 21:59:11.804059   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:14.301199   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:16.301517   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:11.928081   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.330891   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.28291236s)
	I0318 21:59:14.330933   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330948   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.261785854s)
	I0318 21:59:14.330971   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330974   65699 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.402863992s)
	I0318 21:59:14.330979   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.283167958s)
	I0318 21:59:14.330996   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 21:59:14.331011   65699 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 21:59:14.331019   65699 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331043   65699 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.331064   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331086   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:14.336430   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:15.034609   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:15.534895   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.034956   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.535474   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.534736   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.035297   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.534669   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.035540   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.534617   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327415   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:16.327350   66608 retry.go:31] will retry after 2.24875206s: waiting for machine to come up
	I0318 21:59:18.578068   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578644   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:18.578589   66608 retry.go:31] will retry after 2.267791851s: waiting for machine to come up
	I0318 21:59:18.800406   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:20.800524   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:18.591731   65699 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.255273393s)
	I0318 21:59:18.591789   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 21:59:18.591897   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:18.591937   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.260848845s)
	I0318 21:59:18.591958   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 21:59:18.591986   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:18.592046   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:19.859577   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.267508443s)
	I0318 21:59:19.859608   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 21:59:19.859637   65699 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:19.859641   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.267714811s)
	I0318 21:59:19.859674   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 21:59:19.859685   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:20.035133   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.534922   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.035083   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.534538   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.035505   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.535008   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.035123   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.535181   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.034939   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.534985   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.847586   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848099   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848135   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:20.848048   66608 retry.go:31] will retry after 2.918466892s: waiting for machine to come up
	I0318 21:59:23.768491   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.768999   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.769030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:23.768962   66608 retry.go:31] will retry after 4.373256501s: waiting for machine to come up
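	(The "will retry after ...: waiting for machine to come up" lines above come from a poll-with-backoff loop against the libvirt DHCP leases. A minimal Go sketch of that pattern follows; it is an assumed illustration, not minikube's actual retry.go/libmachine code, and lookupLeaseIP is a hypothetical stand-in for the lease query:)

	    // Poll for the domain's DHCP lease, backing off with growing, jittered delays.
	    package main

	    import (
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // lookupLeaseIP is a hypothetical stand-in for reading the host DHCP leases of
	    // the mk-default-k8s-diff-port-660775 network; it returns "" while no lease
	    // matches the given MAC address.
	    func lookupLeaseIP(mac string) string { return "" }

	    // waitForIP retries lookupLeaseIP with increasing, jittered waits until an IP
	    // is found or the overall timeout elapses.
	    func waitForIP(mac string, timeout time.Duration) (string, error) {
	    	deadline := time.Now().Add(timeout)
	    	wait := 250 * time.Millisecond
	    	for time.Now().Before(deadline) {
	    		if ip := lookupLeaseIP(mac); ip != "" {
	    			return ip, nil
	    		}
	    		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
	    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
	    		time.Sleep(sleep)
	    		wait *= 2 // grow the base interval each round
	    	}
	    	return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
	    }

	    func main() {
	    	if ip, err := waitForIP("52:54:00:80:9c:26", 30*time.Second); err == nil {
	    		fmt.Println("Found IP for machine:", ip)
	    	}
	    }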
	I0318 21:59:22.800765   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:24.801392   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:21.944666   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.084944906s)
	I0318 21:59:21.944700   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 21:59:21.944720   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:21.944766   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:24.714752   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.769964684s)
	I0318 21:59:24.714793   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 21:59:24.714827   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:24.714884   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
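	(The 65699 lines above trace the cached-image load flow: inspect the image in the runtime, remove any stale tag with crictl, skip the tarball copy if it already exists on the VM, then load it with podman. A rough Go sketch of that per-image sequence follows; it is an assumed illustration, not minikube's cache_images.go, and runOnVM is a hypothetical helper standing in for ssh_runner:)

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // runOnVM stands in for executing a command on the guest over SSH; here it
	    // simply runs the command locally for illustration.
	    func runOnVM(name string, args ...string) error {
	    	out, err := exec.Command(name, args...).CombinedOutput()
	    	fmt.Printf("Run: %s %v\n%s", name, args, out)
	    	return err
	    }

	    // loadCachedImage mirrors the per-image steps visible above, e.g. for
	    // registry.k8s.io/kube-proxy:v1.29.0-rc.2.
	    func loadCachedImage(image, tarball string) error {
	    	// 1. Is the image already in the container runtime? (podman image inspect)
	    	if err := runOnVM("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image); err == nil {
	    		return nil // already present, nothing to transfer
	    	}
	    	// 2. Drop any stale tag so the fresh copy wins. (crictl rmi)
	    	_ = runOnVM("sudo", "/usr/bin/crictl", "rmi", image)
	    	// 3. The scp step is skipped when the tarball already exists on the VM
	    	//    ("copy: skipping ... (exists)"); either way, load it into the image store.
	    	return runOnVM("sudo", "podman", "load", "-i", tarball)
	    }

	    func main() {
	    	_ = loadCachedImage("registry.k8s.io/kube-proxy:v1.29.0-rc.2",
	    		"/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2")
	    }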
	I0318 21:59:25.035324   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:25.534635   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.034965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.035448   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.534690   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.034991   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.034585   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.535220   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.146019   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146507   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Found IP for machine: 192.168.50.150
	I0318 21:59:28.146533   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserving static IP address...
	I0318 21:59:28.146549   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has current primary IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146939   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.146966   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserved static IP address: 192.168.50.150
	I0318 21:59:28.146986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | skip adding static IP to network mk-default-k8s-diff-port-660775 - found existing host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"}
	I0318 21:59:28.147006   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Getting to WaitForSSH function...
	I0318 21:59:28.147030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for SSH to be available...
	I0318 21:59:28.149408   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149771   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.149799   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149929   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH client type: external
	I0318 21:59:28.149978   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa (-rw-------)
	I0318 21:59:28.150020   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:28.150039   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | About to run SSH command:
	I0318 21:59:28.150050   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | exit 0
	I0318 21:59:28.273437   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:28.273768   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetConfigRaw
	I0318 21:59:28.274402   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.277330   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.277757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277997   65170 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/config.json ...
	I0318 21:59:28.278217   65170 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:28.278240   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:28.278435   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.280754   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281149   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.281178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281318   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.281495   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281646   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281796   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.281955   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.282163   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.282185   65170 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:28.390614   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:28.390642   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.390896   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:59:28.390923   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.391095   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.394421   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.394838   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.394876   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.395178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.395410   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395775   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.395953   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.396145   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.396160   65170 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-660775 && echo "default-k8s-diff-port-660775" | sudo tee /etc/hostname
	I0318 21:59:28.522303   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-660775
	
	I0318 21:59:28.522347   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.525224   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.525667   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525789   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.525961   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526122   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.526471   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.526651   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.526676   65170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-660775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-660775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-660775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:28.641488   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:28.641521   65170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:28.641547   65170 buildroot.go:174] setting up certificates
	I0318 21:59:28.641555   65170 provision.go:84] configureAuth start
	I0318 21:59:28.641564   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.641871   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.644934   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.645301   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645425   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.647753   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648089   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.648119   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648360   65170 provision.go:143] copyHostCerts
	I0318 21:59:28.648423   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:28.648435   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:28.648507   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:28.648620   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:28.648631   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:28.648660   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:28.648731   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:28.648740   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:28.648769   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:28.648829   65170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-660775 san=[127.0.0.1 192.168.50.150 default-k8s-diff-port-660775 localhost minikube]
	I0318 21:59:28.697191   65170 provision.go:177] copyRemoteCerts
	I0318 21:59:28.697253   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:28.697274   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.699919   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700237   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.700269   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700477   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.700694   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.700882   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.701060   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:28.793840   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:28.829285   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 21:59:28.857628   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:59:28.886344   65170 provision.go:87] duration metric: took 244.778215ms to configureAuth
	I0318 21:59:28.886366   65170 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:28.886527   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:59:28.886593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.889885   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890321   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.890351   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890534   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.890721   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.890879   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.891013   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.891190   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.891366   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.891399   65170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:29.189002   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:29.189033   65170 machine.go:97] duration metric: took 910.801375ms to provisionDockerMachine
	I0318 21:59:29.189046   65170 start.go:293] postStartSetup for "default-k8s-diff-port-660775" (driver="kvm2")
	I0318 21:59:29.189058   65170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:29.189083   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.189409   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:29.189438   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.192164   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.192512   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.192866   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.193045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.193190   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.277850   65170 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:29.282886   65170 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:29.282909   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:29.282975   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:29.283065   65170 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:29.283172   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:29.296052   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:29.323906   65170 start.go:296] duration metric: took 134.847993ms for postStartSetup
	I0318 21:59:29.323945   65170 fix.go:56] duration metric: took 20.61742941s for fixHost
	I0318 21:59:29.323969   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.326616   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.326920   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.327063   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.327300   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327472   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327622   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.327853   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:29.328058   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:29.328070   65170 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:59:29.430348   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799169.377980776
	
	I0318 21:59:29.430377   65170 fix.go:216] guest clock: 1710799169.377980776
	I0318 21:59:29.430386   65170 fix.go:229] Guest: 2024-03-18 21:59:29.377980776 +0000 UTC Remote: 2024-03-18 21:59:29.323950953 +0000 UTC m=+359.071824665 (delta=54.029823ms)
	I0318 21:59:29.430411   65170 fix.go:200] guest clock delta is within tolerance: 54.029823ms
	I0318 21:59:29.430420   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 20.723939352s
	I0318 21:59:29.430450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.430727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:29.433339   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433686   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.433713   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433865   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434308   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434531   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434632   65170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:29.434682   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.434783   65170 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:29.434811   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.437380   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437479   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437731   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437760   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437829   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437880   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.438033   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438170   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438244   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438332   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438393   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438603   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.438694   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.540670   65170 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:29.547318   65170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:29.704221   65170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:29.710762   65170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:29.710832   65170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:29.727820   65170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:29.727838   65170 start.go:494] detecting cgroup driver to use...
	I0318 21:59:29.727905   65170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:29.745750   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:29.760984   65170 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:29.761024   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:29.776639   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:29.791749   65170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:29.914380   65170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:30.096200   65170 docker.go:233] disabling docker service ...
	I0318 21:59:30.096281   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:30.112512   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:30.126090   65170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:30.258617   65170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:30.397700   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:30.420478   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:30.443197   65170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:30.443282   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.455577   65170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:30.455630   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.467898   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.480041   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.492501   65170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:30.505178   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.517657   65170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.537376   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.554749   65170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:30.570281   65170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:30.570352   65170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:30.587991   65170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:30.600354   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:30.744678   65170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:30.902192   65170 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:30.902279   65170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:30.907869   65170 start.go:562] Will wait 60s for crictl version
	I0318 21:59:30.907937   65170 ssh_runner.go:195] Run: which crictl
	I0318 21:59:30.913588   65170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:30.957344   65170 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:30.957431   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:30.991141   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:31.024452   65170 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
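The preceding steps reconfigure CRI-O inside the guest before Kubernetes is started: crictl is pointed at the CRI-O socket, the pause image and cgroup driver are rewritten in the drop-in config, and the service is restarted. A minimal shell sketch of that same sequence, assuming the /etc/crio/crio.conf.d/02-crio.conf drop-in and pause image tag used in this run:

# Sketch only: the CRI-O reconfiguration performed above, collapsed into one place.
# Drop-in path and pause image tag are taken from this run and may differ elsewhere.
printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
sudo systemctl daemon-reload
sudo systemctl restart crio
sudo crictl version    # expect RuntimeName cri-o, RuntimeVersion 1.29.1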
	I0318 21:59:27.301221   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:29.799576   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:26.781379   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.066468133s)
	I0318 21:59:26.781415   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 21:59:26.781445   65699 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:26.781493   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:27.747707   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 21:59:27.747764   65699 cache_images.go:123] Successfully loaded all cached images
	I0318 21:59:27.747769   65699 cache_images.go:92] duration metric: took 17.337757279s to LoadCachedImages
	I0318 21:59:27.747781   65699 kubeadm.go:928] updating node { 192.168.72.84 8443 v1.29.0-rc.2 crio true true} ...
	I0318 21:59:27.747907   65699 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-963041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:27.747986   65699 ssh_runner.go:195] Run: crio config
	I0318 21:59:27.810020   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:27.810048   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:27.810060   65699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:27.810078   65699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.84 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963041 NodeName:no-preload-963041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:27.810242   65699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963041"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:27.810327   65699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 21:59:27.823120   65699 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:27.823172   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:27.834742   65699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 21:59:27.854365   65699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 21:59:27.872873   65699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 21:59:27.891245   65699 ssh_runner.go:195] Run: grep 192.168.72.84	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:27.895305   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:27.907928   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:28.044997   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:28.064471   65699 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041 for IP: 192.168.72.84
	I0318 21:59:28.064489   65699 certs.go:194] generating shared ca certs ...
	I0318 21:59:28.064503   65699 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:28.064668   65699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:28.064733   65699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:28.064747   65699 certs.go:256] generating profile certs ...
	I0318 21:59:28.064847   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/client.key
	I0318 21:59:28.064927   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key.53f57e82
	I0318 21:59:28.064975   65699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key
	I0318 21:59:28.065090   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:28.065140   65699 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:28.065154   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:28.065190   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:28.065218   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:28.065244   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:28.065292   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:28.066189   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:28.108239   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:28.147385   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:28.191255   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:28.231079   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 21:59:28.269730   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:59:28.302326   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:28.331762   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:28.359487   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:28.390196   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:28.422323   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:28.452212   65699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:28.476910   65699 ssh_runner.go:195] Run: openssl version
	I0318 21:59:28.483480   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:28.495230   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500728   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500771   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.507487   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:28.520368   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:28.533700   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540767   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540817   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.549380   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:28.566307   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:28.582377   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589139   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589192   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.597396   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
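The linking steps above install each CA into /etc/ssl/certs under OpenSSL's subject-hash naming scheme, so the runtime can find it by hash. An illustrative sketch of how a name such as b5213941.0 is derived, assuming the same minikubeCA.pem path used in this run:

# Illustrative only: how the /etc/ssl/certs/<hash>.0 names above are derived.
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"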
	I0318 21:59:28.610189   65699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:28.616488   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:28.625547   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:28.634680   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:28.643077   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:28.652470   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:28.660641   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
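The openssl probes above do not print the certificates; they only verify freshness. A sketch of the same check, assuming the certificate paths from this run; -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 24 hours:

# Sketch: each probe exits 0 only if the certificate stays valid for the next 86400 s (24 h).
for c in apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
    || echo "${c}.crt expires within 24h"
done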
	I0318 21:59:28.669216   65699 kubeadm.go:391] StartCluster: {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:28.669342   65699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:28.669444   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.719357   65699 cri.go:89] found id: ""
	I0318 21:59:28.719427   65699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:28.733158   65699 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:28.733179   65699 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:28.733186   65699 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:28.733234   65699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:28.744804   65699 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:28.745805   65699 kubeconfig.go:125] found "no-preload-963041" server: "https://192.168.72.84:8443"
	I0318 21:59:28.747888   65699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:28.757871   65699 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.84
	I0318 21:59:28.757896   65699 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:28.757918   65699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:28.757964   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.805988   65699 cri.go:89] found id: ""
	I0318 21:59:28.806057   65699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:28.829257   65699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:28.841515   65699 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:28.841543   65699 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:28.841594   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:59:28.853433   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:28.853499   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:28.864593   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:59:28.875236   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:28.875285   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:28.887756   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.898219   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:28.898271   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.909308   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:59:28.919480   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:28.919540   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:28.930305   65699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:28.941125   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:29.056129   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.261585   65699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.205423679s)
	I0318 21:59:30.261614   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.498583   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.589160   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.713046   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:30.713150   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.214160   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.034539   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.535237   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.034842   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.534620   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.034614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.534583   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.035348   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.534614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.034683   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.534528   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.025614   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:31.028381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028758   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:31.028783   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028960   65170 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:31.033836   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:31.048652   65170 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:31.048798   65170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:59:31.048853   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:31.089246   65170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:59:31.089322   65170 ssh_runner.go:195] Run: which lz4
	I0318 21:59:31.094026   65170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:59:31.098900   65170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:59:31.098929   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:59:33.166556   65170 crio.go:462] duration metric: took 2.072562246s to copy over tarball
	I0318 21:59:33.166639   65170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:59:31.810567   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:34.301018   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:36.346463   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:31.714009   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.762157   65699 api_server.go:72] duration metric: took 1.049110677s to wait for apiserver process to appear ...
	I0318 21:59:31.762188   65699 api_server.go:88] waiting for apiserver healthz status ...
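The polling that follows probes the apiserver's /healthz endpoint until it reports healthy. A curl sketch of the same probe, assuming this run's endpoint 192.168.72.84:8443; unauthenticated requests are rejected with 403 until the bootstrap RBAC roles that expose /healthz exist, which matches the responses recorded below:

# Sketch only: poll this run's control-plane endpoint until /healthz returns "ok".
# -k skips TLS verification, since the serving certificate is signed by minikubeCA.
until curl -ksf https://192.168.72.84:8443/healthz >/dev/null; do
  sleep 0.5
done
curl -ks https://192.168.72.84:8443/healthz   # prints "ok" once every check passes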
	I0318 21:59:31.762210   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:31.762737   65699 api_server.go:269] stopped: https://192.168.72.84:8443/healthz: Get "https://192.168.72.84:8443/healthz": dial tcp 192.168.72.84:8443: connect: connection refused
	I0318 21:59:32.263205   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.738750   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.738785   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.738802   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.804061   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.804102   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.804116   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.842097   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.842144   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:35.262351   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.267395   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.267439   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:35.763016   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.775072   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.775109   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.262338   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:36.267165   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:36.267207   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.762879   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.074225   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:37.074263   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:37.262637   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.267514   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 21:59:37.275551   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 21:59:37.275579   65699 api_server.go:131] duration metric: took 5.513383348s to wait for apiserver health ...
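The retry loop above (api_server.go:253/279) keeps polling /healthz until the apiserver stops returning 500 for the rbac/bootstrap-roles post-start hook. Below is a minimal, self-contained sketch of that kind of poll in Go; the address and interval are taken from the log, TLS verification is skipped for brevity, and this is an illustration rather than minikube's actual implementation.

	// Poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.72.84:8443/healthz" // address from the log above
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The bootstrapping apiserver serves a cert signed by the cluster CA; a real
			// client would pin that CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the poll interval seen above
		}
		fmt.Println("timed out waiting for apiserver health")
	}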
	I0318 21:59:37.275590   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:37.275598   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:37.496330   65699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:37.641915   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:37.659277   65699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
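The 457-byte 1-k8s.conflist copied above is the bridge CNI configuration for the node. Purely as an illustration, the sketch below writes a typical bridge + host-local conflist of that general shape; the exact fields and values of minikube's generated file may differ, and writing under /etc/cni/net.d requires root.

	// Write an illustrative bridge CNI conflist; values are typical, not minikube's exact file.
	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // illustrative pod CIDR
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		data, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
			panic(err)
		}
	}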
	I0318 21:59:37.684019   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:38.075296   65699 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:38.075333   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:38.075353   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:38.075367   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:38.075375   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:38.075388   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:38.075407   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:38.075418   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:38.075429   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:38.075440   65699 system_pods.go:74] duration metric: took 391.399859ms to wait for pod list to return data ...
	I0318 21:59:38.075452   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:38.252627   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:38.252659   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:38.252670   65699 node_conditions.go:105] duration metric: took 177.209294ms to run NodePressure ...
	I0318 21:59:38.252692   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.662257   65699 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670807   65699 kubeadm.go:733] kubelet initialised
	I0318 21:59:38.670836   65699 kubeadm.go:734] duration metric: took 8.550399ms waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670846   65699 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:38.680740   65699 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.689134   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689157   65699 pod_ready.go:81] duration metric: took 8.393104ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.689169   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689178   65699 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.693796   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693815   65699 pod_ready.go:81] duration metric: took 4.628403ms for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.693824   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693829   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.701225   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701245   65699 pod_ready.go:81] duration metric: took 7.410052ms for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.701254   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701262   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.707848   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707871   65699 pod_ready.go:81] duration metric: took 6.598987ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.707882   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707889   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.066641   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066668   65699 pod_ready.go:81] duration metric: took 358.769058ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.066679   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066687   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.466406   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466440   65699 pod_ready.go:81] duration metric: took 399.746217ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.466449   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466455   65699 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.866206   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866232   65699 pod_ready.go:81] duration metric: took 399.76891ms for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.866240   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866247   65699 pod_ready.go:38] duration metric: took 1.195391629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
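The pod_ready.go waits above poll each system-critical pod until its Ready condition becomes True, skipping pods whose node is not yet "Ready". Below is a minimal client-go sketch of such a readiness wait; the kubeconfig path and timeout are illustrative and this is not minikube's code.

	// Wait until every pod in kube-system reports the Ready condition as True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				ready := 0
				for i := range pods.Items {
					if podReady(&pods.Items[i]) {
						ready++
					}
				}
				fmt.Printf("%d/%d kube-system pods ready\n", ready, len(pods.Items))
				if len(pods.Items) > 0 && ready == len(pods.Items) {
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for kube-system pods")
	}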
	I0318 21:59:39.866263   65699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 21:59:39.879772   65699 ops.go:34] apiserver oom_adj: -16
	I0318 21:59:39.879796   65699 kubeadm.go:591] duration metric: took 11.146603139s to restartPrimaryControlPlane
	I0318 21:59:39.879807   65699 kubeadm.go:393] duration metric: took 11.21059758s to StartCluster
	I0318 21:59:39.879825   65699 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.879915   65699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:59:39.881739   65699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.881970   65699 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:59:39.883934   65699 out.go:177] * Verifying Kubernetes components...
	I0318 21:59:39.882064   65699 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 21:59:39.882254   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:39.885913   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:39.885924   65699 addons.go:69] Setting metrics-server=true in profile "no-preload-963041"
	I0318 21:59:39.885932   65699 addons.go:69] Setting default-storageclass=true in profile "no-preload-963041"
	I0318 21:59:39.885950   65699 addons.go:234] Setting addon metrics-server=true in "no-preload-963041"
	W0318 21:59:39.885958   65699 addons.go:243] addon metrics-server should already be in state true
	I0318 21:59:39.885966   65699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963041"
	I0318 21:59:39.885918   65699 addons.go:69] Setting storage-provisioner=true in profile "no-preload-963041"
	I0318 21:59:39.885985   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886000   65699 addons.go:234] Setting addon storage-provisioner=true in "no-preload-963041"
	W0318 21:59:39.886052   65699 addons.go:243] addon storage-provisioner should already be in state true
	I0318 21:59:39.886075   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886384   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886403   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886437   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886392   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886448   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886438   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.902103   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0318 21:59:39.902574   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.903192   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.903211   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.903568   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.904113   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.904142   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.908122   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0318 21:59:39.908269   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0318 21:59:39.908566   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.908639   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.909237   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.909251   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.909662   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.909834   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.913534   65699 addons.go:234] Setting addon default-storageclass=true in "no-preload-963041"
	W0318 21:59:39.913558   65699 addons.go:243] addon default-storageclass should already be in state true
	I0318 21:59:39.913586   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.913959   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.913992   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.921260   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.921284   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.921661   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.922725   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.922778   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.925575   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0318 21:59:39.926170   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.926799   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.926819   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.933014   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0318 21:59:39.933066   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.934464   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.934527   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.935441   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.935456   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.936236   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.936821   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.936870   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.936983   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.938986   65699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:39.940103   65699 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:39.940115   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 21:59:39.940128   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.942712   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943138   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.943168   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943415   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.943574   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.943690   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.943828   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
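sshutil.go above builds an SSH client from the machine's IP, port, key path and username, and ssh_runner then executes commands and copies files over that connection. A minimal sketch of an equivalent client using golang.org/x/crypto/ssh follows; host-key checking is skipped for brevity and the command shown is only an example, not minikube's runner.

	// Open an SSH session with a private key and run a single command.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.168.72.84:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("sudo systemctl start kubelet") // example command
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}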
	I0318 21:59:39.944813   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0318 21:59:39.961605   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.962117   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.962140   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.962564   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.962745   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.964606   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.970697   65699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 21:59:35.034845   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:35.535418   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.534613   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.034944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.535119   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.035549   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.534668   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.034813   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.534586   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.222479   65170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.055805805s)
	I0318 21:59:36.222507   65170 crio.go:469] duration metric: took 3.055923767s to extract the tarball
	I0318 21:59:36.222515   65170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:59:36.265990   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:36.314679   65170 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:59:36.314704   65170 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:59:36.314714   65170 kubeadm.go:928] updating node { 192.168.50.150 8444 v1.28.4 crio true true} ...
	I0318 21:59:36.314828   65170 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-660775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:36.314900   65170 ssh_runner.go:195] Run: crio config
	I0318 21:59:36.375889   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:36.375908   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:36.375916   65170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:36.375935   65170 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.150 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-660775 NodeName:default-k8s-diff-port-660775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:36.376058   65170 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.150
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-660775"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:36.376117   65170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
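The kubeadm config printed above is generated from the parameters in the preceding "kubeadm options" line and then written out as /var/tmp/minikube/kubeadm.yaml.new. Purely as an illustration, the sketch below renders a much-reduced config from a few of those parameters with text/template; the template and struct fields are illustrative, not minikube's.

	// Render a small kubeadm config fragment from a parameter struct.
	package main

	import (
		"os"
		"text/template"
	)

	type params struct {
		AdvertiseAddress  string
		APIServerPort     int
		KubernetesVersion string
		ClusterName       string
		PodSubnet         string
		ServiceCIDR       string
	}

	const tmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.APIServerPort}}\n" +
		"---\n" +
		"apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: ClusterConfiguration\n" +
		"clusterName: {{.ClusterName}}\n" +
		"kubernetesVersion: {{.KubernetesVersion}}\n" +
		"networking:\n" +
		"  podSubnet: \"{{.PodSubnet}}\"\n" +
		"  serviceSubnet: {{.ServiceCIDR}}\n"

	func main() {
		// Values taken from the log above; everything else is a simplification.
		p := params{
			AdvertiseAddress:  "192.168.50.150",
			APIServerPort:     8444,
			KubernetesVersion: "v1.28.4",
			ClusterName:       "mk",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}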
	I0318 21:59:36.387851   65170 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:36.387905   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:36.398095   65170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0318 21:59:36.416507   65170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:59:36.437165   65170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0318 21:59:36.458125   65170 ssh_runner.go:195] Run: grep 192.168.50.150	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:36.462688   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:36.476913   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:36.629523   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:36.648679   65170 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775 for IP: 192.168.50.150
	I0318 21:59:36.648697   65170 certs.go:194] generating shared ca certs ...
	I0318 21:59:36.648717   65170 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:36.648870   65170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:36.648942   65170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:36.648956   65170 certs.go:256] generating profile certs ...
	I0318 21:59:36.649061   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/client.key
	I0318 21:59:36.649136   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key.6eb93750
	I0318 21:59:36.649181   65170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key
	I0318 21:59:36.649342   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:36.649408   65170 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:36.649427   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:36.649465   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:36.649502   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:36.649524   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:36.649563   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:36.650116   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:36.709130   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:36.777530   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:36.822349   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:36.861155   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 21:59:36.899264   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:59:36.930697   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:36.960715   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:36.992062   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:37.020001   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:37.051443   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:37.080115   65170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:37.102221   65170 ssh_runner.go:195] Run: openssl version
	I0318 21:59:37.111020   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:37.127447   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132675   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132730   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.139092   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:37.151349   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:37.166470   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172601   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172656   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.179404   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:37.192628   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:37.206758   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211839   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211882   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.218285   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:59:37.230291   65170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:37.235312   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:37.242399   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:37.249658   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:37.256458   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:37.263110   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:37.270329   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
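The openssl x509 -checkend 86400 invocations above verify that each control-plane certificate remains valid for at least another day before the cluster restart proceeds. The Go sketch below performs the same check for a single certificate; the path is one of those from the log and the rest is illustrative.

	// Report whether a PEM certificate expires within the next 24 hours (openssl -checkend 86400).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 86400 seconds")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24 hours")
	}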
	I0318 21:59:37.277040   65170 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:37.277140   65170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:37.277176   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.320525   65170 cri.go:89] found id: ""
	I0318 21:59:37.320595   65170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:37.332584   65170 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:37.332602   65170 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:37.332608   65170 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:37.332678   65170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:37.348017   65170 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:37.349557   65170 kubeconfig.go:125] found "default-k8s-diff-port-660775" server: "https://192.168.50.150:8444"
	I0318 21:59:37.352826   65170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:37.367223   65170 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.150
	I0318 21:59:37.367256   65170 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:37.367267   65170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:37.367315   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.411319   65170 cri.go:89] found id: ""
	I0318 21:59:37.411401   65170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:37.431545   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:37.442587   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:37.442610   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:37.442661   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 21:59:37.452384   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:37.452439   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:37.462519   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 21:59:37.472669   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:37.472728   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:37.483107   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.493177   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:37.493224   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.503546   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 21:59:37.513471   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:37.513512   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:37.524147   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:37.534940   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:37.665308   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.882330   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216992532s)
	I0318 21:59:38.882356   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.110948   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.217267   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.332300   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:39.332389   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.833190   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.972027   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 21:59:39.972078   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 21:59:39.972109   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.975122   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975608   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.975627   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975994   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.976196   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.976371   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.976663   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.982859   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I0318 21:59:39.983263   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.983860   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.983904   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.984308   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.984558   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.986338   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.986645   65699 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:39.986690   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 21:59:39.986718   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.989398   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989741   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.989999   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989951   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.990229   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.990392   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.990517   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:40.115233   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:40.136271   65699 node_ready.go:35] waiting up to 6m0s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:40.232668   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:40.234394   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 21:59:40.234417   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 21:59:40.256237   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:40.301845   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 21:59:40.301873   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 21:59:40.354405   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:40.354435   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 21:59:40.377996   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:41.389416   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.156705132s)
	I0318 21:59:41.389429   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.133120616s)
	I0318 21:59:41.389470   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389475   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389482   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389486   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389763   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389783   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389792   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389799   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389828   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.389874   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389890   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389899   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389938   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.390199   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390398   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.390339   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390375   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390451   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390470   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.397714   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.397736   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.397951   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.397999   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.398017   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.415620   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037584799s)
	I0318 21:59:41.415673   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.415684   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.415964   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.415992   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.416007   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416016   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.416027   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.416207   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.416220   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416229   65699 addons.go:470] Verifying addon metrics-server=true in "no-preload-963041"
	I0318 21:59:41.418761   65699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 21:59:38.798943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:40.800913   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:41.420038   65699 addons.go:505] duration metric: took 1.537986468s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
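	The addon enablement above follows one pattern per manifest: the YAML is streamed over the SSH session into /etc/kubernetes/addons ("scp memory --> ..."), then applied with the node's bundled kubectl against /var/lib/minikube/kubeconfig. A minimal sketch of that apply step is below; it is illustrative only (not minikube's ssh_runner code), and only the kubectl path, kubeconfig path and manifest paths are taken from the log lines above.

```go
// Illustrative sketch: run the same "sudo KUBECONFIG=... kubectl apply -f ..."
// invocation that the log shows, for the addon manifests already copied to
// /etc/kubernetes/addons. Not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	// Same binary and kubeconfig paths as the ssh_runner commands above.
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```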
	I0318 21:59:40.332810   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.411342   65170 api_server.go:72] duration metric: took 1.079036948s to wait for apiserver process to appear ...
	I0318 21:59:40.411371   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:40.411394   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:40.411932   65170 api_server.go:269] stopped: https://192.168.50.150:8444/healthz: Get "https://192.168.50.150:8444/healthz": dial tcp 192.168.50.150:8444: connect: connection refused
	I0318 21:59:40.911545   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.377410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.377443   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.377471   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.426410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.426468   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.426485   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.448464   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.448523   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:43.912498   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.918271   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.918309   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.411824   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.422200   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:44.422223   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.911509   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.916884   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 21:59:44.928835   65170 api_server.go:141] control plane version: v1.28.4
	I0318 21:59:44.928862   65170 api_server.go:131] duration metric: took 4.517483413s to wait for apiserver health ...
	I0318 21:59:44.928872   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:44.928881   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:44.930794   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
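	The healthz sequence above is the usual restart progression for the apiserver at 192.168.50.150:8444: first "connection refused" while the process starts, then 403 because the anonymous probe is rejected until the system:public-info-viewer bootstrap ClusterRole exists, then 500 while the [-] post-start hooks (rbac/bootstrap-roles, bootstrap-controller, and so on) are still completing, and finally 200, after which minikube proceeds to CNI configuration. A minimal Go sketch of such a polling loop is below; only the endpoint comes from the log, the rest is an illustrative assumption rather than minikube's api_server.go code.

```go
// Sketch of a /healthz polling loop that tolerates the transient 403/500
// responses shown above and returns once the apiserver reports 200.
// Illustrative only; the insecure TLS client mirrors the anonymous probe.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthz check passed
			}
			// 403: bootstrap RBAC not ready yet; 500: post-start hooks still failing.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.150:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```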
	I0318 21:59:40.035532   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.535482   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.035196   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.534632   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.035183   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.535562   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.034598   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.534971   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.535025   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.932164   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:44.959217   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:45.002449   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:45.017348   65170 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:45.017394   65170 system_pods.go:61] "coredns-5dd5756b68-cjq2v" [9ae899ef-63e4-407d-9013-71552ec87614] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:45.017407   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [286b98ba-bc9e-4e2f-984c-d7b2447aef15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:45.017417   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [7a0db461-f8d5-4331-993e-d7b9345159e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:45.017428   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [e4f5859a-dfcc-41d8-9a17-acb601449821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:45.017443   65170 system_pods.go:61] "kube-proxy-qt2m6" [c3c7c6db-4935-4079-b0e7-60ba2cd886b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:45.017450   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [7115eef0-5ff4-4dfe-9135-88ad8f698e43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:45.017461   65170 system_pods.go:61] "metrics-server-57f55c9bc5-5dtf5" [b19191ee-e2db-4392-82e2-1a95fae76101] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:45.017489   65170 system_pods.go:61] "storage-provisioner" [045d4b30-47a3-4c80-a9e8-c36ef7395e6c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:45.017498   65170 system_pods.go:74] duration metric: took 15.027239ms to wait for pod list to return data ...
	I0318 21:59:45.017511   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:45.020962   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:45.020982   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:45.020991   65170 node_conditions.go:105] duration metric: took 3.47292ms to run NodePressure ...
	I0318 21:59:45.021007   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:45.277662   65170 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282939   65170 kubeadm.go:733] kubelet initialised
	I0318 21:59:45.282958   65170 kubeadm.go:734] duration metric: took 5.277143ms waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282965   65170 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.289546   65170 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:43.299509   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:45.300875   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:42.142145   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:44.641863   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:45.640660   65699 node_ready.go:49] node "no-preload-963041" has status "Ready":"True"
	I0318 21:59:45.640686   65699 node_ready.go:38] duration metric: took 5.50437071s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:45.640697   65699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.647087   65699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652062   65699 pod_ready.go:92] pod "coredns-76f75df574-6mtzp" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.652081   65699 pod_ready.go:81] duration metric: took 4.969873ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652091   65699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.035239   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.535303   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.034742   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.534584   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.034935   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.534952   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.534497   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.035380   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.535498   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.296790   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298834   65170 pod_ready.go:81] duration metric: took 9.259848ms for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.298849   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298868   65170 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.307325   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307367   65170 pod_ready.go:81] duration metric: took 8.486967ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.307380   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307389   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.319473   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319498   65170 pod_ready.go:81] duration metric: took 12.100242ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.319514   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319522   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.407356   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407379   65170 pod_ready.go:81] duration metric: took 87.846686ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.407390   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407395   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806835   65170 pod_ready.go:92] pod "kube-proxy-qt2m6" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.806866   65170 pod_ready.go:81] duration metric: took 399.462221ms for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806878   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:47.814286   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:47.799616   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:50.300118   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:46.659819   65699 pod_ready.go:92] pod "etcd-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:46.659855   65699 pod_ready.go:81] duration metric: took 1.007755238s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:46.659868   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:48.669033   65699 pod_ready.go:102] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:51.168202   65699 pod_ready.go:92] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.168229   65699 pod_ready.go:81] duration metric: took 4.508354098s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.168240   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174243   65699 pod_ready.go:92] pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.174268   65699 pod_ready.go:81] duration metric: took 6.018685ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174280   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179279   65699 pod_ready.go:92] pod "kube-proxy-kkrzx" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.179300   65699 pod_ready.go:81] duration metric: took 5.012711ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179311   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185651   65699 pod_ready.go:92] pod "kube-scheduler-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.185670   65699 pod_ready.go:81] duration metric: took 6.351567ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185678   65699 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
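	The pod_ready lines above (and the repeated "Ready":"False" polls) show the per-pod wait: each system-critical pod's Ready condition is checked until it reports True, and pods hosted on a node that is itself not Ready are skipped with a WaitExtra error, as in the default-k8s-diff-port-660775 entries earlier. A rough client-go equivalent of that check is sketched below; it assumes a kubeconfig at the default location and uses a pod name from the log, and it is an illustration rather than minikube's pod_ready.go.

```go
// Minimal client-go sketch of a "pod Ready" poll. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").
			Get(context.TODO(), "metrics-server-57f55c9bc5-rdthh", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the log records "Ready":"False" on each poll
	}
}
```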
	I0318 21:59:50.034691   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.534680   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.535213   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.034594   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.535195   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.034574   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.535423   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.035369   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.534621   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.315135   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.814432   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.798645   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:54.800561   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:53.191834   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.192346   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.035308   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:55.535503   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.035231   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.534937   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.035317   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.534581   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.034565   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.534830   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.535280   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:59:59.535354   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:59:59.577600   65622 cri.go:89] found id: ""
	I0318 21:59:59.577632   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.577643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:59:59.577651   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:59:59.577710   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:59:59.614134   65622 cri.go:89] found id: ""
	I0318 21:59:59.614158   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.614166   65622 logs.go:278] No container was found matching "etcd"
	I0318 21:59:59.614171   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:59:59.614245   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:59:59.653525   65622 cri.go:89] found id: ""
	I0318 21:59:59.653559   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.653571   65622 logs.go:278] No container was found matching "coredns"
	I0318 21:59:59.653578   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:59:59.653633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:59:59.699104   65622 cri.go:89] found id: ""
	I0318 21:59:59.699128   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.699139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:59:59.699146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:59:59.699214   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:59:59.735750   65622 cri.go:89] found id: ""
	I0318 21:59:59.735779   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.735789   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:59:59.735796   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:59:59.735876   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:59:59.775105   65622 cri.go:89] found id: ""
	I0318 21:59:59.775134   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.775142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:59:59.775149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:59:59.775193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:59:59.814154   65622 cri.go:89] found id: ""
	I0318 21:59:59.814181   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.814190   65622 logs.go:278] No container was found matching "kindnet"
	I0318 21:59:59.814197   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 21:59:59.814254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 21:59:59.852518   65622 cri.go:89] found id: ""
	I0318 21:59:59.852545   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.852556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 21:59:59.852565   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 21:59:59.852578   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:59:59.907243   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 21:59:59.907285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:59:59.922512   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:59:59.922540   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 21:59:55.313448   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.813863   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:56.813885   65170 pod_ready.go:81] duration metric: took 11.006997984s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:56.813893   65170 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:58.820535   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.802709   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:59.299235   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:01.299761   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:57.694309   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:00.192594   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	W0318 22:00:00.059182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:00.059202   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:00.059216   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:00.125654   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:00.125686   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
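	For the 65622 profile (running the v1.20.0 binaries) the picture is different: pgrep finds no kube-apiserver process, crictl finds no containers for any control-plane name, and "kubectl describe nodes" cannot reach localhost:8443, so each cycle falls back to gathering kubelet, dmesg, CRI-O and container-status logs before retrying; the same cycle repeats below. A small sketch of that container lookup, using the same crictl invocation as the log, is shown here (illustrative only, not minikube's cri.go):

```go
// Illustrative sketch: list CRI containers by name with crictl and report
// when none are found, the condition that triggers the log-gathering
// fallback above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	names := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, n := range names {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+n).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", n)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", n, len(ids))
	}
}
```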
	I0318 22:00:02.675440   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:02.689549   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:02.689628   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:02.731742   65622 cri.go:89] found id: ""
	I0318 22:00:02.731764   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.731771   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:02.731776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:02.731823   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:02.809611   65622 cri.go:89] found id: ""
	I0318 22:00:02.809643   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.809651   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:02.809656   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:02.809699   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:02.853939   65622 cri.go:89] found id: ""
	I0318 22:00:02.853972   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.853982   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:02.853990   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:02.854050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:02.892668   65622 cri.go:89] found id: ""
	I0318 22:00:02.892699   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.892709   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:02.892715   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:02.892773   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:02.934267   65622 cri.go:89] found id: ""
	I0318 22:00:02.934296   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.934307   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:02.934313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:02.934370   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:02.972533   65622 cri.go:89] found id: ""
	I0318 22:00:02.972556   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.972564   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:02.972569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:02.972614   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:03.011102   65622 cri.go:89] found id: ""
	I0318 22:00:03.011128   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.011137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:03.011142   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:03.011188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:03.060636   65622 cri.go:89] found id: ""
	I0318 22:00:03.060664   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.060673   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:03.060696   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:03.060710   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:03.145042   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:03.145070   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:03.145087   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:03.218475   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:03.218504   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:03.262154   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:03.262185   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:03.316766   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:03.316803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:00.821070   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.821300   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:03.301922   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.799844   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.693235   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:04.693324   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.833936   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:05.850780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:05.850858   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:05.894909   65622 cri.go:89] found id: ""
	I0318 22:00:05.894931   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.894938   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:05.894944   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:05.894987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:05.935989   65622 cri.go:89] found id: ""
	I0318 22:00:05.936020   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.936028   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:05.936032   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:05.936081   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:05.976774   65622 cri.go:89] found id: ""
	I0318 22:00:05.976797   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.976805   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:05.976811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:05.976869   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:06.015350   65622 cri.go:89] found id: ""
	I0318 22:00:06.015376   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.015387   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:06.015394   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:06.015453   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:06.059389   65622 cri.go:89] found id: ""
	I0318 22:00:06.059416   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.059427   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:06.059434   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:06.059513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:06.099524   65622 cri.go:89] found id: ""
	I0318 22:00:06.099544   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.099553   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:06.099558   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:06.099601   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:06.140343   65622 cri.go:89] found id: ""
	I0318 22:00:06.140374   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.140386   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:06.140393   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:06.140448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:06.179217   65622 cri.go:89] found id: ""
	I0318 22:00:06.179247   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.179257   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:06.179268   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:06.179286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:06.231348   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:06.231379   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:06.246049   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:06.246084   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:06.326182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:06.326203   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:06.326215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:06.405862   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:06.405895   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:08.955965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:08.970007   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:08.970076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:09.008724   65622 cri.go:89] found id: ""
	I0318 22:00:09.008752   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.008764   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:09.008781   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:09.008856   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:09.050121   65622 cri.go:89] found id: ""
	I0318 22:00:09.050158   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.050165   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:09.050170   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:09.050227   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:09.090263   65622 cri.go:89] found id: ""
	I0318 22:00:09.090293   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.090304   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:09.090312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:09.090375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:09.127645   65622 cri.go:89] found id: ""
	I0318 22:00:09.127679   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.127690   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:09.127697   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:09.127755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:09.169171   65622 cri.go:89] found id: ""
	I0318 22:00:09.169199   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.169211   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:09.169218   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:09.169278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:09.209923   65622 cri.go:89] found id: ""
	I0318 22:00:09.209949   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.209956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:09.209963   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:09.210013   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:09.247990   65622 cri.go:89] found id: ""
	I0318 22:00:09.248029   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.248039   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:09.248050   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:09.248109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:09.287287   65622 cri.go:89] found id: ""
	I0318 22:00:09.287326   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.287337   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:09.287347   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:09.287369   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:09.342877   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:09.342902   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:09.359137   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:09.359159   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:09.454504   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:09.454528   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:09.454543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:09.549191   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:09.549223   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
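The pass above is one complete round of minikube's control-plane diagnostics: with the apiserver on localhost:8443 refusing connections, every crictl query for kube-apiserver, etcd, coredns, the scheduler, kube-proxy, the controller-manager, kindnet and the dashboard returns an empty ID list, so the collector falls back to the kubelet, dmesg and CRI-O journals. The same pass can be replayed by hand from a shell on the node (for example via minikube ssh); the commands below are copied from the log lines above, nothing is added:

	sudo crictl ps -a --quiet --name=kube-apiserver    # empty output = no apiserver container found
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig      # fails while localhost:8443 refuses connections
	sudo journalctl -u crio -n 400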
	I0318 22:00:05.322655   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.820557   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.821227   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.799881   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.802803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:06.694723   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.096415   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:12.112886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:12.112969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:12.155639   65622 cri.go:89] found id: ""
	I0318 22:00:12.155662   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.155670   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:12.155676   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:12.155729   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:12.199252   65622 cri.go:89] found id: ""
	I0318 22:00:12.199283   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.199293   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:12.199301   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:12.199385   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:12.239688   65622 cri.go:89] found id: ""
	I0318 22:00:12.239719   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.239728   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:12.239734   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:12.239788   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:12.278610   65622 cri.go:89] found id: ""
	I0318 22:00:12.278640   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.278651   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:12.278659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:12.278724   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:12.318834   65622 cri.go:89] found id: ""
	I0318 22:00:12.318864   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.318873   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:12.318881   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:12.318939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:12.358964   65622 cri.go:89] found id: ""
	I0318 22:00:12.358986   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.358994   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:12.359002   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:12.359050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:12.399041   65622 cri.go:89] found id: ""
	I0318 22:00:12.399070   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.399080   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:12.399087   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:12.399151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:12.445019   65622 cri.go:89] found id: ""
	I0318 22:00:12.445043   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.445053   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:12.445064   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:12.445079   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:12.504987   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:12.505023   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:12.521381   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:12.521408   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:12.601574   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:12.601599   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:12.601615   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:12.683772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:12.683801   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:11.821593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:13.821792   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.299680   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.300073   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:11.693179   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.194532   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:15.229005   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:15.248227   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:15.248296   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:15.307918   65622 cri.go:89] found id: ""
	I0318 22:00:15.307940   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.307947   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:15.307953   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:15.307997   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:15.367388   65622 cri.go:89] found id: ""
	I0318 22:00:15.367417   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.367436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:15.367453   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:15.367513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:15.410880   65622 cri.go:89] found id: ""
	I0318 22:00:15.410910   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.410919   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:15.410926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:15.410983   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:15.450980   65622 cri.go:89] found id: ""
	I0318 22:00:15.451004   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.451011   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:15.451018   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:15.451071   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:15.491196   65622 cri.go:89] found id: ""
	I0318 22:00:15.491222   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.491233   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:15.491239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:15.491284   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:15.537135   65622 cri.go:89] found id: ""
	I0318 22:00:15.537159   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.537166   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:15.537173   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:15.537226   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:15.580730   65622 cri.go:89] found id: ""
	I0318 22:00:15.580762   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.580772   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:15.580780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:15.580852   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:15.626221   65622 cri.go:89] found id: ""
	I0318 22:00:15.626252   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.626265   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:15.626276   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:15.626292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.670571   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:15.670600   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:15.725485   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:15.725519   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:15.742790   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:15.742820   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:15.824867   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:15.824889   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:15.824924   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.407070   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:18.421757   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:18.421824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:18.461024   65622 cri.go:89] found id: ""
	I0318 22:00:18.461044   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.461052   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:18.461058   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:18.461104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:18.499002   65622 cri.go:89] found id: ""
	I0318 22:00:18.499032   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.499040   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:18.499046   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:18.499091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:18.539207   65622 cri.go:89] found id: ""
	I0318 22:00:18.539237   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.539248   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:18.539255   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:18.539315   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:18.579691   65622 cri.go:89] found id: ""
	I0318 22:00:18.579717   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.579726   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:18.579733   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:18.579814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:18.625084   65622 cri.go:89] found id: ""
	I0318 22:00:18.625111   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.625120   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:18.625126   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:18.625178   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:18.669012   65622 cri.go:89] found id: ""
	I0318 22:00:18.669038   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.669047   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:18.669053   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:18.669101   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:18.707523   65622 cri.go:89] found id: ""
	I0318 22:00:18.707544   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.707551   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:18.707557   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:18.707611   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:18.755138   65622 cri.go:89] found id: ""
	I0318 22:00:18.755162   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.755173   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:18.755184   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:18.755199   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:18.809140   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:18.809163   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:18.827102   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:18.827125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:18.904168   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:18.904194   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:18.904209   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.982438   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:18.982471   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.822593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.321691   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.798687   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.802403   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.302525   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.692709   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.692875   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:20.693620   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.532643   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:21.547477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:21.547545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:21.585013   65622 cri.go:89] found id: ""
	I0318 22:00:21.585038   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.585049   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:21.585056   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:21.585114   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:21.628115   65622 cri.go:89] found id: ""
	I0318 22:00:21.628139   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.628147   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:21.628153   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:21.628207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:21.664896   65622 cri.go:89] found id: ""
	I0318 22:00:21.664931   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.664942   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:21.664948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:21.665010   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:21.705770   65622 cri.go:89] found id: ""
	I0318 22:00:21.705794   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.705803   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:21.705811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:21.705868   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:21.751268   65622 cri.go:89] found id: ""
	I0318 22:00:21.751296   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.751305   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:21.751313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:21.751376   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:21.798688   65622 cri.go:89] found id: ""
	I0318 22:00:21.798714   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.798724   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:21.798732   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:21.798800   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:21.839253   65622 cri.go:89] found id: ""
	I0318 22:00:21.839281   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.839290   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:21.839297   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:21.839365   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:21.884026   65622 cri.go:89] found id: ""
	I0318 22:00:21.884055   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.884068   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:21.884086   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:21.884105   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:21.940412   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:21.940446   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:21.956634   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:21.956660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:22.031458   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:22.031481   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:22.031497   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:22.115902   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:22.115932   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:24.665945   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:24.680474   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:24.680545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:24.719692   65622 cri.go:89] found id: ""
	I0318 22:00:24.719711   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.719718   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:24.719723   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:24.719768   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:24.760734   65622 cri.go:89] found id: ""
	I0318 22:00:24.760758   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.760767   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:24.760775   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:24.760830   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:24.802688   65622 cri.go:89] found id: ""
	I0318 22:00:24.802710   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.802717   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:24.802723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:24.802778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:24.842693   65622 cri.go:89] found id: ""
	I0318 22:00:24.842715   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.842723   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:24.842730   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:24.842796   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:24.887149   65622 cri.go:89] found id: ""
	I0318 22:00:24.887173   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.887185   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:24.887195   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:24.887278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:24.926465   65622 cri.go:89] found id: ""
	I0318 22:00:24.926511   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.926522   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:24.926530   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:24.926584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:24.966876   65622 cri.go:89] found id: ""
	I0318 22:00:24.966897   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.966904   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:24.966910   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:24.966957   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:20.820297   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:22.821250   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:24.825337   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.800104   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:26.299105   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.193665   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.194188   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.007251   65622 cri.go:89] found id: ""
	I0318 22:00:25.007277   65622 logs.go:276] 0 containers: []
	W0318 22:00:25.007288   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:25.007298   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:25.007311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:25.092214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:25.092235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:25.092247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:25.173041   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:25.173076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:25.221169   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:25.221194   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:25.276322   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:25.276352   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
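Each cycle begins with sudo pgrep -xnf kube-apiserver.*minikube.*, which keeps coming back empty. A quick, independent way to confirm that nothing is serving on the apiserver port is a socket or health-endpoint check from the node; this is only a sketch and assumes the ss and curl binaries are available in the node image (neither appears in this log):

	sudo ss -ltnp | grep 8443                     # no output => nothing listening on port 8443
	curl -ksS https://localhost:8443/healthz      # "connection refused" matches the errors above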
	I0318 22:00:27.792368   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:27.809294   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:27.809359   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:27.848976   65622 cri.go:89] found id: ""
	I0318 22:00:27.849005   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.849015   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:27.849023   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:27.849076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:27.890416   65622 cri.go:89] found id: ""
	I0318 22:00:27.890437   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.890445   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:27.890450   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:27.890505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:27.934782   65622 cri.go:89] found id: ""
	I0318 22:00:27.934807   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.934819   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:27.934827   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:27.934911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:27.972251   65622 cri.go:89] found id: ""
	I0318 22:00:27.972275   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.972283   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:27.972288   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:27.972366   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:28.011321   65622 cri.go:89] found id: ""
	I0318 22:00:28.011345   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.011357   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:28.011363   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:28.011421   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:28.048087   65622 cri.go:89] found id: ""
	I0318 22:00:28.048109   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.048116   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:28.048122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:28.048169   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:28.088840   65622 cri.go:89] found id: ""
	I0318 22:00:28.088868   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.088878   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:28.088886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:28.088961   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:28.128687   65622 cri.go:89] found id: ""
	I0318 22:00:28.128714   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.128723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:28.128733   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:28.128745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:28.170853   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:28.170882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:28.224825   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:28.224850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:28.239744   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:28.239773   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:28.318640   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:28.318664   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:28.318680   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:27.321417   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:29.326924   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:28.798399   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.800456   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:27.692517   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.194633   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.897430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:30.914894   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:30.914950   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:30.952709   65622 cri.go:89] found id: ""
	I0318 22:00:30.952737   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.952748   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:30.952756   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:30.952814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:30.991113   65622 cri.go:89] found id: ""
	I0318 22:00:30.991142   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.991151   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:30.991159   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:30.991218   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:31.030248   65622 cri.go:89] found id: ""
	I0318 22:00:31.030273   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.030283   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:31.030291   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:31.030356   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:31.070836   65622 cri.go:89] found id: ""
	I0318 22:00:31.070860   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.070868   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:31.070874   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:31.070941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:31.109134   65622 cri.go:89] found id: ""
	I0318 22:00:31.109154   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.109162   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:31.109167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:31.109222   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:31.149757   65622 cri.go:89] found id: ""
	I0318 22:00:31.149784   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.149794   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:31.149802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:31.149862   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:31.190355   65622 cri.go:89] found id: ""
	I0318 22:00:31.190383   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.190393   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:31.190401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:31.190462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:31.229866   65622 cri.go:89] found id: ""
	I0318 22:00:31.229892   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.229900   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:31.229909   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:31.229926   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:31.284984   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:31.285027   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:31.301026   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:31.301050   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:31.378120   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:31.378143   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:31.378158   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:31.459445   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:31.459475   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:34.003989   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:34.020959   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:34.021012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:34.060045   65622 cri.go:89] found id: ""
	I0318 22:00:34.060074   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.060086   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:34.060103   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:34.060151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:34.101259   65622 cri.go:89] found id: ""
	I0318 22:00:34.101289   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.101299   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:34.101307   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:34.101372   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:34.141056   65622 cri.go:89] found id: ""
	I0318 22:00:34.141085   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.141096   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:34.141103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:34.141166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:34.179757   65622 cri.go:89] found id: ""
	I0318 22:00:34.179786   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.179797   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:34.179805   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:34.179872   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:34.221928   65622 cri.go:89] found id: ""
	I0318 22:00:34.221956   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.221989   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:34.221998   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:34.222063   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:34.260775   65622 cri.go:89] found id: ""
	I0318 22:00:34.260796   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.260804   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:34.260809   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:34.260866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:34.300910   65622 cri.go:89] found id: ""
	I0318 22:00:34.300936   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.300944   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:34.300950   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:34.300994   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:34.343581   65622 cri.go:89] found id: ""
	I0318 22:00:34.343611   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.343619   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:34.343628   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:34.343640   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:34.399298   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:34.399330   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:34.414580   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:34.414619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:34.488013   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:34.488031   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:34.488043   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:34.580958   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:34.580994   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:31.821301   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:34.322210   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:33.299227   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.800314   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:32.693924   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.191865   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.129601   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:37.147758   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:37.147827   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:37.194763   65622 cri.go:89] found id: ""
	I0318 22:00:37.194784   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.194791   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:37.194797   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:37.194845   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:37.236298   65622 cri.go:89] found id: ""
	I0318 22:00:37.236326   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.236334   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:37.236353   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:37.236488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:37.274776   65622 cri.go:89] found id: ""
	I0318 22:00:37.274803   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.274813   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:37.274819   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:37.274883   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:37.319360   65622 cri.go:89] found id: ""
	I0318 22:00:37.319385   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.319395   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:37.319401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:37.319463   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:37.365699   65622 cri.go:89] found id: ""
	I0318 22:00:37.365726   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.365734   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:37.365740   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:37.365824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:37.404758   65622 cri.go:89] found id: ""
	I0318 22:00:37.404789   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.404799   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:37.404807   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:37.404874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:37.444567   65622 cri.go:89] found id: ""
	I0318 22:00:37.444591   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.444598   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:37.444603   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:37.444665   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:37.487729   65622 cri.go:89] found id: ""
	I0318 22:00:37.487752   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.487760   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:37.487767   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:37.487786   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:37.566214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:37.566235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:37.566258   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:37.647847   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:37.647930   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:37.693027   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:37.693057   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:37.748111   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:37.748152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:36.324995   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.820800   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.298887   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.299570   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.193636   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:39.693273   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
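Interleaved with the retry loop, three other minikube processes running in parallel (PIDs 65170, 65211 and 65699) are polling metrics-server pods in kube-system that never report Ready. The same Ready condition can be inspected directly with kubectl; a minimal sketch, assuming kubectl is pointed at the matching cluster (pod name taken from the log lines above):

	kubectl -n kube-system get pod metrics-server-57f55c9bc5-5dtf5 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True/False
	kubectl -n kube-system wait pod/metrics-server-57f55c9bc5-5dtf5 \
	    --for=condition=Ready --timeout=120s                            # blocks until Ready or timeout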
	I0318 22:00:40.277510   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:40.292312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:40.292384   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:40.330335   65622 cri.go:89] found id: ""
	I0318 22:00:40.330368   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.330379   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:40.330386   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:40.330441   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:40.372534   65622 cri.go:89] found id: ""
	I0318 22:00:40.372560   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.372570   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:40.372577   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:40.372624   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:40.409430   65622 cri.go:89] found id: ""
	I0318 22:00:40.409460   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.409471   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:40.409478   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:40.409525   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:40.448350   65622 cri.go:89] found id: ""
	I0318 22:00:40.448372   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.448380   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:40.448385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:40.448431   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:40.490526   65622 cri.go:89] found id: ""
	I0318 22:00:40.490550   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.490559   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:40.490564   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:40.490613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:40.528926   65622 cri.go:89] found id: ""
	I0318 22:00:40.528953   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.528963   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:40.528971   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:40.529031   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:40.565779   65622 cri.go:89] found id: ""
	I0318 22:00:40.565808   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.565818   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:40.565826   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:40.565902   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:40.604152   65622 cri.go:89] found id: ""
	I0318 22:00:40.604181   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.604192   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:40.604201   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:40.604215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:40.689274   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:40.689310   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:40.736810   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:40.736844   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:40.796033   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:40.796061   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:40.811906   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:40.811929   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:40.889595   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:43.390663   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:43.407179   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:43.407254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:43.448653   65622 cri.go:89] found id: ""
	I0318 22:00:43.448685   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.448696   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:43.448704   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:43.448772   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:43.489437   65622 cri.go:89] found id: ""
	I0318 22:00:43.489464   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.489472   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:43.489478   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:43.489533   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:43.564173   65622 cri.go:89] found id: ""
	I0318 22:00:43.564199   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.564209   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:43.564217   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:43.564278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:43.606221   65622 cri.go:89] found id: ""
	I0318 22:00:43.606250   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.606260   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:43.606267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:43.606333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:43.646748   65622 cri.go:89] found id: ""
	I0318 22:00:43.646782   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.646794   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:43.646802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:43.646864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:43.690465   65622 cri.go:89] found id: ""
	I0318 22:00:43.690496   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.690509   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:43.690519   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:43.690584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:43.730421   65622 cri.go:89] found id: ""
	I0318 22:00:43.730454   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.730464   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:43.730473   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:43.730538   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:43.769597   65622 cri.go:89] found id: ""
	I0318 22:00:43.769626   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.769636   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:43.769646   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:43.769660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:43.858316   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:43.858351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:43.907387   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:43.907417   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:43.963234   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:43.963271   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:43.979226   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:43.979253   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:44.065174   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:40.821224   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:43.319945   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.300484   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.300924   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.302264   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.192508   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.192743   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.566048   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:46.583140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:46.583212   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:46.624593   65622 cri.go:89] found id: ""
	I0318 22:00:46.624634   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.624643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:46.624649   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:46.624700   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:46.664828   65622 cri.go:89] found id: ""
	I0318 22:00:46.664858   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.664868   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:46.664874   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:46.664944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:46.703632   65622 cri.go:89] found id: ""
	I0318 22:00:46.703658   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.703668   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:46.703675   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:46.703736   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:46.743379   65622 cri.go:89] found id: ""
	I0318 22:00:46.743409   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.743420   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:46.743427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:46.743487   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:46.784145   65622 cri.go:89] found id: ""
	I0318 22:00:46.784169   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.784178   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:46.784184   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:46.784233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:46.826469   65622 cri.go:89] found id: ""
	I0318 22:00:46.826491   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.826498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:46.826504   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:46.826559   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:46.868061   65622 cri.go:89] found id: ""
	I0318 22:00:46.868089   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.868102   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:46.868110   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:46.868167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:46.910584   65622 cri.go:89] found id: ""
	I0318 22:00:46.910612   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.910622   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:46.910630   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:46.910642   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:46.954131   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:46.954157   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:47.008706   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:47.008737   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:47.024447   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:47.024474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:47.113208   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:47.113228   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:47.113242   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:49.699416   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:49.714870   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:49.714943   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:49.754386   65622 cri.go:89] found id: ""
	I0318 22:00:49.754415   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.754424   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:49.754430   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:49.754485   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:49.800223   65622 cri.go:89] found id: ""
	I0318 22:00:49.800248   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.800258   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:49.800268   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:49.800331   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:49.846747   65622 cri.go:89] found id: ""
	I0318 22:00:49.846775   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.846785   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:49.846793   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:49.846842   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:49.885554   65622 cri.go:89] found id: ""
	I0318 22:00:49.885581   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.885592   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:49.885600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:49.885652   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:49.925116   65622 cri.go:89] found id: ""
	I0318 22:00:49.925136   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.925144   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:49.925149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:49.925193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:49.968467   65622 cri.go:89] found id: ""
	I0318 22:00:49.968491   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.968498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:49.968503   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:49.968575   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:45.321277   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:47.821205   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.822803   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:48.799135   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.801798   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.692554   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.193102   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:51.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.016222   65622 cri.go:89] found id: ""
	I0318 22:00:50.016253   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.016261   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:50.016267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:50.016320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:50.057053   65622 cri.go:89] found id: ""
	I0318 22:00:50.057074   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.057082   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:50.057090   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:50.057101   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:50.137602   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:50.137631   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:50.213200   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:50.213227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:50.293533   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:50.293568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:50.312993   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:50.313019   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:50.399235   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:52.900027   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:52.914846   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:52.914918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:52.951864   65622 cri.go:89] found id: ""
	I0318 22:00:52.951887   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.951895   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:52.951900   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:52.951959   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:52.992339   65622 cri.go:89] found id: ""
	I0318 22:00:52.992374   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.992386   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:52.992393   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:52.992448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:53.030499   65622 cri.go:89] found id: ""
	I0318 22:00:53.030527   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.030536   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:53.030543   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:53.030610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:53.069607   65622 cri.go:89] found id: ""
	I0318 22:00:53.069635   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.069645   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:53.069652   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:53.069706   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:53.110235   65622 cri.go:89] found id: ""
	I0318 22:00:53.110256   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.110263   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:53.110269   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:53.110320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:53.152066   65622 cri.go:89] found id: ""
	I0318 22:00:53.152092   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.152100   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:53.152106   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:53.152166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:53.195360   65622 cri.go:89] found id: ""
	I0318 22:00:53.195386   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.195395   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:53.195402   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:53.195448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:53.235134   65622 cri.go:89] found id: ""
	I0318 22:00:53.235159   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.235166   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:53.235174   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:53.235186   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:53.286442   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:53.286473   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:53.342152   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:53.342183   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:53.358414   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:53.358438   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:53.430515   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:53.430534   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:53.430545   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:52.320478   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:54.321815   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.301031   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:55.799954   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.693639   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.193657   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.016088   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:56.034274   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:56.034350   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:56.095539   65622 cri.go:89] found id: ""
	I0318 22:00:56.095565   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.095581   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:56.095588   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:56.095645   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:56.149796   65622 cri.go:89] found id: ""
	I0318 22:00:56.149824   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.149834   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:56.149845   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:56.149907   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:56.205720   65622 cri.go:89] found id: ""
	I0318 22:00:56.205745   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.205760   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:56.205768   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:56.205828   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:56.250790   65622 cri.go:89] found id: ""
	I0318 22:00:56.250834   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.250862   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:56.250876   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:56.250944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:56.290516   65622 cri.go:89] found id: ""
	I0318 22:00:56.290538   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.290545   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:56.290552   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:56.290609   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:56.335528   65622 cri.go:89] found id: ""
	I0318 22:00:56.335557   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.335570   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:56.335577   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:56.335638   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:56.380336   65622 cri.go:89] found id: ""
	I0318 22:00:56.380365   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.380376   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:56.380383   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:56.380448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:56.426326   65622 cri.go:89] found id: ""
	I0318 22:00:56.426351   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.426359   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:56.426368   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:56.426385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:56.479966   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:56.480002   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.495557   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:56.495588   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:56.573474   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:56.573495   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:56.573506   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:56.657795   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:56.657826   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.206212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:59.221879   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:59.221936   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:59.265944   65622 cri.go:89] found id: ""
	I0318 22:00:59.265976   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.265986   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:59.265994   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:59.266052   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:59.305105   65622 cri.go:89] found id: ""
	I0318 22:00:59.305125   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.305132   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:59.305137   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:59.305182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:59.343573   65622 cri.go:89] found id: ""
	I0318 22:00:59.343600   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.343610   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:59.343618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:59.343674   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:59.385560   65622 cri.go:89] found id: ""
	I0318 22:00:59.385580   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.385587   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:59.385592   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:59.385639   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:59.422955   65622 cri.go:89] found id: ""
	I0318 22:00:59.422983   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.422994   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:59.423001   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:59.423062   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:59.460526   65622 cri.go:89] found id: ""
	I0318 22:00:59.460550   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.460561   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:59.460569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:59.460627   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:59.502703   65622 cri.go:89] found id: ""
	I0318 22:00:59.502732   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.502739   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:59.502753   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:59.502803   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:59.539097   65622 cri.go:89] found id: ""
	I0318 22:00:59.539120   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.539128   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:59.539136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:59.539147   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:59.613607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:59.613628   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:59.613643   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:59.697432   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:59.697460   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.744643   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:59.744671   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:59.800670   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:59.800704   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.820977   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.822348   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:57.804405   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.299016   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.692166   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.692526   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.318430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:02.334082   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:02.334158   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:02.383122   65622 cri.go:89] found id: ""
	I0318 22:01:02.383151   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.383161   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:02.383169   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:02.383229   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:02.426847   65622 cri.go:89] found id: ""
	I0318 22:01:02.426874   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.426884   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:02.426891   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:02.426955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:02.466377   65622 cri.go:89] found id: ""
	I0318 22:01:02.466403   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.466429   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:02.466437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:02.466501   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:02.506916   65622 cri.go:89] found id: ""
	I0318 22:01:02.506943   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.506953   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:02.506961   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:02.507021   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:02.549401   65622 cri.go:89] found id: ""
	I0318 22:01:02.549431   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.549439   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:02.549445   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:02.549494   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:02.589498   65622 cri.go:89] found id: ""
	I0318 22:01:02.589524   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.589535   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:02.589542   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:02.589603   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:02.626325   65622 cri.go:89] found id: ""
	I0318 22:01:02.626358   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.626369   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:02.626376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:02.626440   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:02.664922   65622 cri.go:89] found id: ""
	I0318 22:01:02.664949   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.664958   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:02.664969   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:02.664986   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:02.722853   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:02.722883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:02.740280   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:02.740305   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:02.819215   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:02.819232   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:02.819244   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:02.902355   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:02.902395   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:01.319955   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:03.324127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.299297   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:04.299721   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.694116   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.193971   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.452180   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:05.465921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:05.465981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:05.507224   65622 cri.go:89] found id: ""
	I0318 22:01:05.507245   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.507255   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:05.507262   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:05.507329   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:05.544705   65622 cri.go:89] found id: ""
	I0318 22:01:05.544737   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.544748   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:05.544754   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:05.544814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:05.583552   65622 cri.go:89] found id: ""
	I0318 22:01:05.583580   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.583592   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:05.583600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:05.583668   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:05.620969   65622 cri.go:89] found id: ""
	I0318 22:01:05.620995   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.621002   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:05.621009   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:05.621054   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:05.662789   65622 cri.go:89] found id: ""
	I0318 22:01:05.662816   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.662827   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:05.662835   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:05.662900   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:05.701457   65622 cri.go:89] found id: ""
	I0318 22:01:05.701496   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.701506   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:05.701513   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:05.701566   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:05.742050   65622 cri.go:89] found id: ""
	I0318 22:01:05.742078   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.742088   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:05.742095   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:05.742162   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:05.782620   65622 cri.go:89] found id: ""
	I0318 22:01:05.782645   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.782653   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:05.782661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:05.782672   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:05.875779   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:05.875815   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.927687   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:05.927711   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:05.979235   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:05.979264   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:05.997508   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:05.997536   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:06.073619   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:08.574277   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:08.588248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:08.588312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:08.626950   65622 cri.go:89] found id: ""
	I0318 22:01:08.626976   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.626987   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:08.626993   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:08.627050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:08.670404   65622 cri.go:89] found id: ""
	I0318 22:01:08.670429   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.670436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:08.670442   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:08.670505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:08.706036   65622 cri.go:89] found id: ""
	I0318 22:01:08.706063   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.706072   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:08.706079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:08.706134   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:08.743251   65622 cri.go:89] found id: ""
	I0318 22:01:08.743279   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.743290   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:08.743298   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:08.743361   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:08.782303   65622 cri.go:89] found id: ""
	I0318 22:01:08.782329   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.782340   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:08.782347   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:08.782413   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:08.827060   65622 cri.go:89] found id: ""
	I0318 22:01:08.827086   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.827095   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:08.827104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:08.827157   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:08.867098   65622 cri.go:89] found id: ""
	I0318 22:01:08.867126   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.867137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:08.867145   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:08.867192   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:08.906283   65622 cri.go:89] found id: ""
	I0318 22:01:08.906314   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.906323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:08.906334   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:08.906349   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:08.959145   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:08.959171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:08.976307   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:08.976336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:09.049255   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:09.049285   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:09.049300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:09.139458   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:09.139493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.821257   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.320779   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:06.799599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.800534   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.301906   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:07.195710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:09.691770   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.687215   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:11.701855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:11.701926   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:11.740185   65622 cri.go:89] found id: ""
	I0318 22:01:11.740213   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.740224   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:11.740231   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:11.740293   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:11.782083   65622 cri.go:89] found id: ""
	I0318 22:01:11.782110   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.782119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:11.782126   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:11.782187   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:11.830887   65622 cri.go:89] found id: ""
	I0318 22:01:11.830910   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.830920   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:11.830928   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:11.830981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:11.868585   65622 cri.go:89] found id: ""
	I0318 22:01:11.868607   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.868613   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:11.868618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:11.868673   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:11.912298   65622 cri.go:89] found id: ""
	I0318 22:01:11.912324   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.912336   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:11.912343   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:11.912396   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:11.957511   65622 cri.go:89] found id: ""
	I0318 22:01:11.957536   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.957546   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:11.957553   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:11.957610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:11.998894   65622 cri.go:89] found id: ""
	I0318 22:01:11.998916   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.998927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:11.998934   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:11.998984   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:12.039419   65622 cri.go:89] found id: ""
	I0318 22:01:12.039446   65622 logs.go:276] 0 containers: []
	W0318 22:01:12.039458   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:12.039468   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:12.039484   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:12.094721   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:12.094750   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:12.110328   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:12.110351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:12.183351   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:12.183371   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:12.183385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:12.260772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:12.260812   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:14.806518   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:14.821701   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:14.821760   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:14.864280   65622 cri.go:89] found id: ""
	I0318 22:01:14.864307   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.864316   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:14.864322   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:14.864380   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:14.913041   65622 cri.go:89] found id: ""
	I0318 22:01:14.913071   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.913083   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:14.913091   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:14.913155   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:14.951563   65622 cri.go:89] found id: ""
	I0318 22:01:14.951586   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.951594   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:14.951600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:14.951651   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:10.321379   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:12.321708   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.324578   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:13.303344   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:15.799107   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.692795   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.192711   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:16.192974   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.993070   65622 cri.go:89] found id: ""
	I0318 22:01:14.993103   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.993114   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:14.993122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:14.993182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:15.033552   65622 cri.go:89] found id: ""
	I0318 22:01:15.033580   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.033591   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:15.033600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:15.033660   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:15.075982   65622 cri.go:89] found id: ""
	I0318 22:01:15.076009   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.076020   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:15.076031   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:15.076090   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:15.118757   65622 cri.go:89] found id: ""
	I0318 22:01:15.118784   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.118795   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:15.118801   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:15.118844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:15.160333   65622 cri.go:89] found id: ""
	I0318 22:01:15.160355   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.160366   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:15.160374   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:15.160387   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:15.239607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:15.239635   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:15.239653   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:15.324254   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:15.324285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:15.370722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:15.370754   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:15.423268   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:15.423297   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:17.940107   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:17.954692   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:17.954749   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:18.001810   65622 cri.go:89] found id: ""
	I0318 22:01:18.001831   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.001838   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:18.001844   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:18.001903   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:18.042871   65622 cri.go:89] found id: ""
	I0318 22:01:18.042897   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.042909   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:18.042916   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:18.042975   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:18.083933   65622 cri.go:89] found id: ""
	I0318 22:01:18.083956   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.083964   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:18.083969   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:18.084019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:18.125590   65622 cri.go:89] found id: ""
	I0318 22:01:18.125617   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.125628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:18.125636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:18.125697   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:18.166696   65622 cri.go:89] found id: ""
	I0318 22:01:18.166727   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.166737   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:18.166745   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:18.166806   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:18.211273   65622 cri.go:89] found id: ""
	I0318 22:01:18.211297   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.211308   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:18.211315   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:18.211382   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:18.251821   65622 cri.go:89] found id: ""
	I0318 22:01:18.251844   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.251851   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:18.251860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:18.251918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:18.290507   65622 cri.go:89] found id: ""
	I0318 22:01:18.290531   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.290541   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:18.290552   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:18.290568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:18.349013   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:18.349041   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:18.366082   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:18.366113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:18.441742   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:18.441766   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:18.441780   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:18.535299   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:18.535335   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:16.820809   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.820856   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:17.800874   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.301479   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.691838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.692582   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:21.077652   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:21.092980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:21.093039   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:21.132742   65622 cri.go:89] found id: ""
	I0318 22:01:21.132762   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.132770   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:21.132776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:21.132833   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:21.170814   65622 cri.go:89] found id: ""
	I0318 22:01:21.170836   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.170844   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:21.170849   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:21.170911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:21.212812   65622 cri.go:89] found id: ""
	I0318 22:01:21.212845   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.212853   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:21.212860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:21.212924   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:21.254010   65622 cri.go:89] found id: ""
	I0318 22:01:21.254036   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.254044   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:21.254052   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:21.254095   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:21.292032   65622 cri.go:89] found id: ""
	I0318 22:01:21.292061   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.292073   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:21.292083   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:21.292152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:21.336946   65622 cri.go:89] found id: ""
	I0318 22:01:21.336975   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.336985   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:21.336992   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:21.337043   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:21.380295   65622 cri.go:89] found id: ""
	I0318 22:01:21.380319   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.380328   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:21.380336   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:21.380399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:21.417674   65622 cri.go:89] found id: ""
	I0318 22:01:21.417701   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.417708   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:21.417717   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:21.417728   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:21.470782   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:21.470808   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:21.486015   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:21.486036   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:21.560654   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:21.560682   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:21.560699   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:21.644108   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:21.644146   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:24.190787   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:24.205695   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:24.205761   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:24.262577   65622 cri.go:89] found id: ""
	I0318 22:01:24.262602   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.262610   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:24.262615   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:24.262680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:24.304807   65622 cri.go:89] found id: ""
	I0318 22:01:24.304835   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.304845   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:24.304853   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:24.304933   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:24.345595   65622 cri.go:89] found id: ""
	I0318 22:01:24.345670   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.345688   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:24.345696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:24.345762   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:24.388471   65622 cri.go:89] found id: ""
	I0318 22:01:24.388498   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.388508   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:24.388515   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:24.388573   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:24.429610   65622 cri.go:89] found id: ""
	I0318 22:01:24.429641   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.429653   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:24.429663   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:24.429728   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:24.469661   65622 cri.go:89] found id: ""
	I0318 22:01:24.469683   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.469690   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:24.469696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:24.469740   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:24.508086   65622 cri.go:89] found id: ""
	I0318 22:01:24.508115   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.508126   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:24.508133   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:24.508195   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:24.548963   65622 cri.go:89] found id: ""
	I0318 22:01:24.548988   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.548998   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:24.549009   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:24.549028   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:24.603983   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:24.604012   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:24.620185   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:24.620207   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:24.699677   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:24.699699   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:24.699713   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:24.778830   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:24.778884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:20.821237   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.320180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:22.302559   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:24.800442   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.193491   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:25.692671   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.334749   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:27.349132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:27.349188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:27.394163   65622 cri.go:89] found id: ""
	I0318 22:01:27.394190   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.394197   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:27.394203   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:27.394259   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:27.435176   65622 cri.go:89] found id: ""
	I0318 22:01:27.435198   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.435207   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:27.435215   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:27.435273   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:27.475388   65622 cri.go:89] found id: ""
	I0318 22:01:27.475414   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.475422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:27.475427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:27.475474   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:27.516225   65622 cri.go:89] found id: ""
	I0318 22:01:27.516247   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.516255   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:27.516265   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:27.516321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:27.554423   65622 cri.go:89] found id: ""
	I0318 22:01:27.554451   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.554459   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:27.554465   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:27.554518   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:27.592315   65622 cri.go:89] found id: ""
	I0318 22:01:27.592342   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.592352   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:27.592360   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:27.592418   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:27.634820   65622 cri.go:89] found id: ""
	I0318 22:01:27.634842   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.634849   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:27.634855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:27.634912   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:27.673677   65622 cri.go:89] found id: ""
	I0318 22:01:27.673703   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.673713   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:27.673724   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:27.673738   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:27.728342   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:27.728370   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:27.745465   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:27.745493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:27.817800   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:27.817822   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:27.817836   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:27.905115   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:27.905152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:25.322575   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.323097   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.821127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.302001   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.799369   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.693253   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.192347   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.450454   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:30.464916   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:30.464969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:30.504399   65622 cri.go:89] found id: ""
	I0318 22:01:30.504432   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.504443   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:30.504452   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:30.504505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:30.543216   65622 cri.go:89] found id: ""
	I0318 22:01:30.543240   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.543248   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:30.543254   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:30.543310   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:30.581415   65622 cri.go:89] found id: ""
	I0318 22:01:30.581440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.581451   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:30.581459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:30.581515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:30.620419   65622 cri.go:89] found id: ""
	I0318 22:01:30.620440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.620447   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:30.620453   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:30.620495   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:30.671859   65622 cri.go:89] found id: ""
	I0318 22:01:30.671886   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.671893   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:30.671899   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:30.671955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:30.732705   65622 cri.go:89] found id: ""
	I0318 22:01:30.732732   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.732742   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:30.732750   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:30.732811   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:30.793811   65622 cri.go:89] found id: ""
	I0318 22:01:30.793839   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.793850   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:30.793856   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:30.793915   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:30.851516   65622 cri.go:89] found id: ""
	I0318 22:01:30.851539   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.851546   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:30.851555   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:30.851566   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:30.907463   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:30.907496   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:30.924254   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:30.924286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:31.002155   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:31.002177   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:31.002193   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:31.085486   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:31.085515   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:33.627379   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:33.641314   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:33.641378   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:33.683093   65622 cri.go:89] found id: ""
	I0318 22:01:33.683119   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.683129   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:33.683136   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:33.683193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:33.724006   65622 cri.go:89] found id: ""
	I0318 22:01:33.724034   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.724042   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:33.724048   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:33.724091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:33.761196   65622 cri.go:89] found id: ""
	I0318 22:01:33.761224   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.761240   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:33.761248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:33.761306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:33.800636   65622 cri.go:89] found id: ""
	I0318 22:01:33.800661   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.800670   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:33.800676   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:33.800733   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:33.839423   65622 cri.go:89] found id: ""
	I0318 22:01:33.839450   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.839458   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:33.839464   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:33.839508   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:33.883076   65622 cri.go:89] found id: ""
	I0318 22:01:33.883102   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.883112   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:33.883118   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:33.883174   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:33.921886   65622 cri.go:89] found id: ""
	I0318 22:01:33.921909   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.921920   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:33.921926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:33.921981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:33.964632   65622 cri.go:89] found id: ""
	I0318 22:01:33.964659   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.964670   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:33.964680   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:33.964700   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:34.043708   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:34.043731   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:34.043743   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:34.129150   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:34.129178   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:34.176067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:34.176089   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:34.231399   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:34.231433   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:32.324221   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.821547   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.301599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.798017   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.692835   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.693519   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.747929   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:36.761803   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:36.761859   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:36.806407   65622 cri.go:89] found id: ""
	I0318 22:01:36.806434   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.806441   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:36.806447   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:36.806498   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:36.849046   65622 cri.go:89] found id: ""
	I0318 22:01:36.849073   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.849084   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:36.849092   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:36.849152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:36.889880   65622 cri.go:89] found id: ""
	I0318 22:01:36.889910   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.889922   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:36.889929   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:36.889995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:36.936012   65622 cri.go:89] found id: ""
	I0318 22:01:36.936033   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.936041   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:36.936046   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:36.936094   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:36.977538   65622 cri.go:89] found id: ""
	I0318 22:01:36.977568   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.977578   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:36.977587   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:36.977647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:37.014843   65622 cri.go:89] found id: ""
	I0318 22:01:37.014870   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.014881   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:37.014888   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:37.014956   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:37.055058   65622 cri.go:89] found id: ""
	I0318 22:01:37.055086   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.055097   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:37.055104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:37.055167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:37.100605   65622 cri.go:89] found id: ""
	I0318 22:01:37.100633   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.100642   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:37.100652   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:37.100666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:37.181840   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:37.181874   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:37.232689   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:37.232721   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:37.287264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:37.287294   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:37.305614   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:37.305638   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:37.389196   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:39.889461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:39.904409   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:39.904472   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:39.944610   65622 cri.go:89] found id: ""
	I0318 22:01:39.944633   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.944641   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:39.944647   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:39.944701   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:37.323580   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.325038   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.798108   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:38.799072   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:40.799797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.694495   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.192489   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:41.193100   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.984337   65622 cri.go:89] found id: ""
	I0318 22:01:39.984360   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.984367   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:39.984373   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:39.984427   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:40.026238   65622 cri.go:89] found id: ""
	I0318 22:01:40.026264   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.026276   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:40.026282   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:40.026338   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:40.075591   65622 cri.go:89] found id: ""
	I0318 22:01:40.075619   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.075628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:40.075636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:40.075686   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:40.126829   65622 cri.go:89] found id: ""
	I0318 22:01:40.126859   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.126871   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:40.126880   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:40.126941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:40.167695   65622 cri.go:89] found id: ""
	I0318 22:01:40.167724   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.167735   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:40.167744   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:40.167802   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:40.205545   65622 cri.go:89] found id: ""
	I0318 22:01:40.205570   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.205582   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:40.205589   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:40.205636   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:40.245521   65622 cri.go:89] found id: ""
	I0318 22:01:40.245547   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.245556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:40.245567   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:40.245583   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:40.306315   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:40.306348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:40.324996   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:40.325021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:40.406484   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:40.406513   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:40.406526   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:40.492294   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:40.492323   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:43.034812   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:43.049661   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:43.049727   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:43.089419   65622 cri.go:89] found id: ""
	I0318 22:01:43.089444   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.089453   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:43.089461   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:43.089515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:43.130350   65622 cri.go:89] found id: ""
	I0318 22:01:43.130384   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.130394   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:43.130401   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:43.130462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:43.171480   65622 cri.go:89] found id: ""
	I0318 22:01:43.171506   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.171515   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:43.171522   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:43.171567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:43.210215   65622 cri.go:89] found id: ""
	I0318 22:01:43.210240   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.210249   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:43.210258   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:43.210312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:43.247024   65622 cri.go:89] found id: ""
	I0318 22:01:43.247049   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.247056   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:43.247063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:43.247113   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:43.283614   65622 cri.go:89] found id: ""
	I0318 22:01:43.283640   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.283651   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:43.283659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:43.283716   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:43.327442   65622 cri.go:89] found id: ""
	I0318 22:01:43.327468   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.327478   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:43.327486   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:43.327544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:43.365732   65622 cri.go:89] found id: ""
	I0318 22:01:43.365760   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.365769   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:43.365780   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:43.365793   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:43.425359   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:43.425396   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:43.442136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:43.442161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:43.519737   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:43.519762   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:43.519777   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:43.602933   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:43.602972   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:41.821043   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:44.322040   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:42.802267   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.301098   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:43.692766   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.693595   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:46.146009   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:46.161266   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:46.161333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:46.203056   65622 cri.go:89] found id: ""
	I0318 22:01:46.203082   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.203094   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:46.203101   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:46.203159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:46.245954   65622 cri.go:89] found id: ""
	I0318 22:01:46.245981   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.245991   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:46.245998   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:46.246069   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:46.282395   65622 cri.go:89] found id: ""
	I0318 22:01:46.282420   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.282431   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:46.282438   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:46.282497   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:46.322036   65622 cri.go:89] found id: ""
	I0318 22:01:46.322061   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.322072   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:46.322079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:46.322136   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:46.360951   65622 cri.go:89] found id: ""
	I0318 22:01:46.360973   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.360981   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:46.360987   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:46.361049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:46.399334   65622 cri.go:89] found id: ""
	I0318 22:01:46.399364   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.399382   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:46.399391   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:46.399450   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:46.443891   65622 cri.go:89] found id: ""
	I0318 22:01:46.443922   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.443933   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:46.443940   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:46.443990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:46.483047   65622 cri.go:89] found id: ""
	I0318 22:01:46.483088   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.483099   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:46.483110   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:46.483124   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:46.542995   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:46.543026   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.559582   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:46.559605   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:46.637046   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:46.637065   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:46.637076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:46.719628   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:46.719657   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.263990   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:49.278403   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:49.278469   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:49.322980   65622 cri.go:89] found id: ""
	I0318 22:01:49.323003   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.323014   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:49.323021   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:49.323077   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:49.360100   65622 cri.go:89] found id: ""
	I0318 22:01:49.360120   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.360127   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:49.360132   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:49.360180   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:49.402044   65622 cri.go:89] found id: ""
	I0318 22:01:49.402084   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.402095   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:49.402103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:49.402164   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:49.442337   65622 cri.go:89] found id: ""
	I0318 22:01:49.442367   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.442391   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:49.442397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:49.442448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:49.479079   65622 cri.go:89] found id: ""
	I0318 22:01:49.479111   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.479124   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:49.479132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:49.479197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:49.526057   65622 cri.go:89] found id: ""
	I0318 22:01:49.526080   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.526090   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:49.526098   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:49.526159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:49.566720   65622 cri.go:89] found id: ""
	I0318 22:01:49.566747   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.566759   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:49.566767   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:49.566821   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:49.603120   65622 cri.go:89] found id: ""
	I0318 22:01:49.603142   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.603152   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:49.603163   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:49.603180   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:49.677879   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:49.677904   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:49.677921   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:49.762904   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:49.762933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.809332   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:49.809358   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:49.861568   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:49.861599   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.322167   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.322495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:47.800006   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.298196   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.193259   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.195154   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.377996   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:52.396078   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:52.396159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:52.435945   65622 cri.go:89] found id: ""
	I0318 22:01:52.435972   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.435980   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:52.435985   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:52.436034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:52.478723   65622 cri.go:89] found id: ""
	I0318 22:01:52.478754   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.478765   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:52.478772   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:52.478835   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:52.522240   65622 cri.go:89] found id: ""
	I0318 22:01:52.522267   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.522275   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:52.522281   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:52.522336   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:52.560168   65622 cri.go:89] found id: ""
	I0318 22:01:52.560195   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.560202   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:52.560208   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:52.560253   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:52.599730   65622 cri.go:89] found id: ""
	I0318 22:01:52.599752   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.599759   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:52.599765   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:52.599810   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:52.640357   65622 cri.go:89] found id: ""
	I0318 22:01:52.640386   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.640400   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:52.640407   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:52.640465   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:52.680925   65622 cri.go:89] found id: ""
	I0318 22:01:52.680954   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.680966   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:52.680972   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:52.681041   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:52.719537   65622 cri.go:89] found id: ""
	I0318 22:01:52.719561   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.719570   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:52.719580   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:52.719597   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:52.773264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:52.773292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:52.788278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:52.788302   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:52.866674   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:52.866700   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:52.866714   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:52.952228   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:52.952263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:50.821598   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:53.321546   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.302659   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:54.799292   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.692794   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.192968   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.499710   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:55.514986   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:55.515049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:55.561168   65622 cri.go:89] found id: ""
	I0318 22:01:55.561191   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.561198   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:55.561204   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:55.561252   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:55.606505   65622 cri.go:89] found id: ""
	I0318 22:01:55.606534   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.606545   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:55.606552   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:55.606613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:55.648625   65622 cri.go:89] found id: ""
	I0318 22:01:55.648655   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.648665   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:55.648672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:55.648731   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:55.690878   65622 cri.go:89] found id: ""
	I0318 22:01:55.690903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.690914   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:55.690923   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:55.690987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:55.729873   65622 cri.go:89] found id: ""
	I0318 22:01:55.729903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.729914   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:55.729921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:55.729982   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:55.767926   65622 cri.go:89] found id: ""
	I0318 22:01:55.767951   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.767959   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:55.767965   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:55.768025   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:55.809907   65622 cri.go:89] found id: ""
	I0318 22:01:55.809934   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.809942   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:55.809947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:55.810009   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:55.853992   65622 cri.go:89] found id: ""
	I0318 22:01:55.854023   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.854032   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:55.854041   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:55.854060   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:55.932160   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:55.932185   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:55.932200   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:56.019976   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:56.020010   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:56.063901   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:56.063935   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:56.119282   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:56.119314   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:58.636555   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:58.651774   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:58.651851   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:58.697005   65622 cri.go:89] found id: ""
	I0318 22:01:58.697037   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.697047   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:58.697055   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:58.697128   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:58.742190   65622 cri.go:89] found id: ""
	I0318 22:01:58.742218   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.742229   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:58.742236   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:58.742297   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:58.779335   65622 cri.go:89] found id: ""
	I0318 22:01:58.779359   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.779378   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:58.779385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:58.779445   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:58.818936   65622 cri.go:89] found id: ""
	I0318 22:01:58.818964   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.818972   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:58.818980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:58.819034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:58.856473   65622 cri.go:89] found id: ""
	I0318 22:01:58.856500   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.856511   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:58.856518   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:58.856579   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:58.897381   65622 cri.go:89] found id: ""
	I0318 22:01:58.897412   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.897423   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:58.897432   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:58.897503   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:58.938179   65622 cri.go:89] found id: ""
	I0318 22:01:58.938209   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.938221   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:58.938228   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:58.938295   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:58.981021   65622 cri.go:89] found id: ""
	I0318 22:01:58.981049   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.981059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:58.981067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:58.981081   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:59.054749   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:59.054779   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:59.070160   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:59.070188   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:59.150369   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:59.150385   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:59.150398   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:59.238341   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:59.238381   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:55.821471   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.822495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.299408   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.299964   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.193704   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.194959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.790139   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:01.807948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:01.808006   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:01.855198   65622 cri.go:89] found id: ""
	I0318 22:02:01.855224   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.855231   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:01.855238   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:01.855291   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:01.895292   65622 cri.go:89] found id: ""
	I0318 22:02:01.895313   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.895321   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:01.895326   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:01.895381   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:01.934102   65622 cri.go:89] found id: ""
	I0318 22:02:01.934127   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.934139   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:01.934146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:01.934196   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:01.975676   65622 cri.go:89] found id: ""
	I0318 22:02:01.975704   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.975715   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:01.975723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:01.975789   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:02.015656   65622 cri.go:89] found id: ""
	I0318 22:02:02.015691   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.015701   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:02.015710   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:02.015771   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:02.058634   65622 cri.go:89] found id: ""
	I0318 22:02:02.058658   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.058666   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:02.058672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:02.058719   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:02.096655   65622 cri.go:89] found id: ""
	I0318 22:02:02.096681   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.096692   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:02.096700   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:02.096767   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:02.137485   65622 cri.go:89] found id: ""
	I0318 22:02:02.137510   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.137519   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:02.137527   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:02.137543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:02.221269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:02.221304   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:02.265816   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:02.265846   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:02.321554   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:02.321592   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:02.338503   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:02.338530   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:02.431779   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:04.932229   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:04.948859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:04.948931   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:00.321126   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:02.321899   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.798818   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:03.800605   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:05.801459   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.693520   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.192449   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:06.192843   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.995353   65622 cri.go:89] found id: ""
	I0318 22:02:04.995379   65622 logs.go:276] 0 containers: []
	W0318 22:02:04.995386   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:04.995392   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:04.995438   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:05.034886   65622 cri.go:89] found id: ""
	I0318 22:02:05.034911   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.034922   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:05.034929   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:05.034995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:05.076635   65622 cri.go:89] found id: ""
	I0318 22:02:05.076663   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.076673   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:05.076681   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:05.076742   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:05.119481   65622 cri.go:89] found id: ""
	I0318 22:02:05.119506   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.119514   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:05.119520   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:05.119571   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:05.162331   65622 cri.go:89] found id: ""
	I0318 22:02:05.162354   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.162369   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:05.162376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:05.162428   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:05.206038   65622 cri.go:89] found id: ""
	I0318 22:02:05.206066   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.206076   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:05.206084   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:05.206142   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:05.251273   65622 cri.go:89] found id: ""
	I0318 22:02:05.251298   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.251309   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:05.251316   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:05.251375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:05.292855   65622 cri.go:89] found id: ""
	I0318 22:02:05.292882   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.292892   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:05.292917   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:05.292933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:05.310330   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:05.310354   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:05.384915   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:05.384938   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:05.384957   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:05.472147   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:05.472182   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:05.544328   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:05.544351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:08.101241   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:08.117397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:08.117515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:08.160011   65622 cri.go:89] found id: ""
	I0318 22:02:08.160035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.160043   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:08.160048   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:08.160100   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:08.202826   65622 cri.go:89] found id: ""
	I0318 22:02:08.202849   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.202860   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:08.202867   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:08.202935   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:08.241743   65622 cri.go:89] found id: ""
	I0318 22:02:08.241780   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.241792   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:08.241800   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:08.241864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:08.280725   65622 cri.go:89] found id: ""
	I0318 22:02:08.280758   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.280769   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:08.280777   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:08.280840   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:08.324015   65622 cri.go:89] found id: ""
	I0318 22:02:08.324035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.324041   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:08.324047   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:08.324104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:08.367332   65622 cri.go:89] found id: ""
	I0318 22:02:08.367356   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.367368   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:08.367375   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:08.367433   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:08.407042   65622 cri.go:89] found id: ""
	I0318 22:02:08.407066   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.407073   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:08.407079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:08.407126   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:08.443800   65622 cri.go:89] found id: ""
	I0318 22:02:08.443820   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.443827   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:08.443836   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:08.443850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:08.459139   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:08.459172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:08.534893   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:08.534918   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:08.534934   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:08.627283   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:08.627322   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:08.672928   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:08.672967   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:06.821775   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:09.322004   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.299572   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:10.799620   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.693106   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.192341   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.230296   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:11.248814   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:11.248891   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:11.297030   65622 cri.go:89] found id: ""
	I0318 22:02:11.297056   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.297065   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:11.297072   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:11.297133   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:11.348811   65622 cri.go:89] found id: ""
	I0318 22:02:11.348837   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.348847   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:11.348854   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:11.348939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:11.412137   65622 cri.go:89] found id: ""
	I0318 22:02:11.412161   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.412168   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:11.412174   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:11.412231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:11.452098   65622 cri.go:89] found id: ""
	I0318 22:02:11.452128   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.452139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:11.452147   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:11.452207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:11.492477   65622 cri.go:89] found id: ""
	I0318 22:02:11.492509   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.492519   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:11.492527   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:11.492588   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:11.532208   65622 cri.go:89] found id: ""
	I0318 22:02:11.532234   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.532244   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:11.532252   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:11.532306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:11.570515   65622 cri.go:89] found id: ""
	I0318 22:02:11.570545   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.570556   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:11.570563   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:11.570633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:11.613031   65622 cri.go:89] found id: ""
	I0318 22:02:11.613052   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.613069   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:11.613079   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:11.613098   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:11.672019   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:11.672048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:11.687528   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:11.687550   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:11.761149   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:11.761172   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:11.761187   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.847273   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:11.847311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:14.393016   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:14.409657   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:14.409732   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:14.451669   65622 cri.go:89] found id: ""
	I0318 22:02:14.451697   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.451711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:14.451717   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:14.451763   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:14.503383   65622 cri.go:89] found id: ""
	I0318 22:02:14.503408   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.503419   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:14.503427   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:14.503491   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:14.543027   65622 cri.go:89] found id: ""
	I0318 22:02:14.543048   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.543056   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:14.543061   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:14.543104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:14.583615   65622 cri.go:89] found id: ""
	I0318 22:02:14.583639   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.583649   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:14.583656   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:14.583713   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:14.621176   65622 cri.go:89] found id: ""
	I0318 22:02:14.621206   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.621217   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:14.621225   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:14.621283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:14.659419   65622 cri.go:89] found id: ""
	I0318 22:02:14.659440   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.659448   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:14.659454   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:14.659499   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:14.699307   65622 cri.go:89] found id: ""
	I0318 22:02:14.699337   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.699347   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:14.699354   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:14.699416   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:14.737379   65622 cri.go:89] found id: ""
	I0318 22:02:14.737406   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.737414   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:14.737421   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:14.737432   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:14.793912   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:14.793939   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:14.809577   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:14.809604   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:14.898740   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:14.898767   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:14.898782   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.821139   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.821610   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.299590   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.303956   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.692089   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.693750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:14.981009   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:14.981038   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.526944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:17.543437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:17.543488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:17.585722   65622 cri.go:89] found id: ""
	I0318 22:02:17.585747   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.585757   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:17.585765   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:17.585820   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:17.623603   65622 cri.go:89] found id: ""
	I0318 22:02:17.623632   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.623642   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:17.623650   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:17.623712   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:17.666086   65622 cri.go:89] found id: ""
	I0318 22:02:17.666113   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.666122   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:17.666130   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:17.666188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:17.714403   65622 cri.go:89] found id: ""
	I0318 22:02:17.714430   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.714440   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:17.714448   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:17.714527   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:17.753174   65622 cri.go:89] found id: ""
	I0318 22:02:17.753199   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.753206   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:17.753212   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:17.753270   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:17.794962   65622 cri.go:89] found id: ""
	I0318 22:02:17.794992   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.795002   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:17.795010   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:17.795068   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:17.835446   65622 cri.go:89] found id: ""
	I0318 22:02:17.835469   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.835477   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:17.835482   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:17.835529   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:17.872243   65622 cri.go:89] found id: ""
	I0318 22:02:17.872271   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.872279   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:17.872287   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:17.872299   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.915485   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:17.915520   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:17.969133   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:17.969161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:17.984278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:17.984300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:18.055851   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:18.055871   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:18.055884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:16.320827   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:18.321654   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.800563   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.300888   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.694101   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.191376   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.646312   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:20.660153   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:20.660220   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:20.704341   65622 cri.go:89] found id: ""
	I0318 22:02:20.704365   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.704376   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:20.704388   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:20.704443   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:20.747673   65622 cri.go:89] found id: ""
	I0318 22:02:20.747694   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.747702   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:20.747708   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:20.747753   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:20.787547   65622 cri.go:89] found id: ""
	I0318 22:02:20.787574   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.787585   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:20.787593   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:20.787694   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:20.830416   65622 cri.go:89] found id: ""
	I0318 22:02:20.830450   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.830461   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:20.830469   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:20.830531   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:20.871867   65622 cri.go:89] found id: ""
	I0318 22:02:20.871899   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.871912   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:20.871919   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:20.871980   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:20.915574   65622 cri.go:89] found id: ""
	I0318 22:02:20.915602   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.915614   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:20.915622   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:20.915680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:20.956277   65622 cri.go:89] found id: ""
	I0318 22:02:20.956313   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.956322   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:20.956329   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:20.956399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:20.997686   65622 cri.go:89] found id: ""
	I0318 22:02:20.997715   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.997723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:20.997732   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:20.997745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:21.015019   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:21.015048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:21.092090   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:21.092117   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:21.092133   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:21.169118   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:21.169149   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:21.215267   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:21.215298   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:23.769587   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:23.784063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:23.784119   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:23.825704   65622 cri.go:89] found id: ""
	I0318 22:02:23.825726   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.825733   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:23.825740   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:23.825795   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:23.871536   65622 cri.go:89] found id: ""
	I0318 22:02:23.871561   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.871579   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:23.871586   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:23.871647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:23.911388   65622 cri.go:89] found id: ""
	I0318 22:02:23.911415   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.911422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:23.911428   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:23.911478   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:23.956649   65622 cri.go:89] found id: ""
	I0318 22:02:23.956671   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.956679   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:23.956687   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:23.956755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:23.999368   65622 cri.go:89] found id: ""
	I0318 22:02:23.999395   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.999405   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:23.999413   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:23.999471   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:24.039075   65622 cri.go:89] found id: ""
	I0318 22:02:24.039105   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.039118   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:24.039124   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:24.039186   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:24.079473   65622 cri.go:89] found id: ""
	I0318 22:02:24.079502   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.079513   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:24.079521   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:24.079587   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:24.118019   65622 cri.go:89] found id: ""
	I0318 22:02:24.118048   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.118059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:24.118069   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:24.118085   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:24.174530   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:24.174562   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:24.191685   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:24.191724   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:24.282133   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:24.282158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:24.282172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:24.366181   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:24.366228   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:20.322586   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.820488   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.820555   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.798797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.799501   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.192760   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.193279   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.912982   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:26.927364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:26.927425   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:26.968236   65622 cri.go:89] found id: ""
	I0318 22:02:26.968259   65622 logs.go:276] 0 containers: []
	W0318 22:02:26.968267   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:26.968272   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:26.968339   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:27.008226   65622 cri.go:89] found id: ""
	I0318 22:02:27.008251   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.008261   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:27.008267   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:27.008321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:27.047742   65622 cri.go:89] found id: ""
	I0318 22:02:27.047767   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.047777   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:27.047784   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:27.047844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:27.090692   65622 cri.go:89] found id: ""
	I0318 22:02:27.090722   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.090734   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:27.090741   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:27.090797   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:27.126596   65622 cri.go:89] found id: ""
	I0318 22:02:27.126621   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.126629   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:27.126635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:27.126684   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:27.162492   65622 cri.go:89] found id: ""
	I0318 22:02:27.162521   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.162530   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:27.162535   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:27.162583   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:27.203480   65622 cri.go:89] found id: ""
	I0318 22:02:27.203504   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.203517   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:27.203524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:27.203598   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:27.247140   65622 cri.go:89] found id: ""
	I0318 22:02:27.247162   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.247172   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:27.247182   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:27.247198   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:27.328507   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:27.328529   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:27.328543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:27.409269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:27.409303   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:27.459615   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:27.459647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:27.512980   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:27.513014   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:26.821222   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.321682   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:27.302631   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.799175   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.693239   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.192207   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.193072   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:30.030021   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:30.045235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:30.045288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:30.092857   65622 cri.go:89] found id: ""
	I0318 22:02:30.092896   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.092919   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:30.092927   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:30.092977   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:30.133145   65622 cri.go:89] found id: ""
	I0318 22:02:30.133169   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.133176   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:30.133181   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:30.133244   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:30.179214   65622 cri.go:89] found id: ""
	I0318 22:02:30.179242   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.179252   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:30.179259   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:30.179323   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:30.221500   65622 cri.go:89] found id: ""
	I0318 22:02:30.221524   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.221533   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:30.221541   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:30.221585   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:30.262483   65622 cri.go:89] found id: ""
	I0318 22:02:30.262505   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.262516   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:30.262524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:30.262584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:30.308456   65622 cri.go:89] found id: ""
	I0318 22:02:30.308482   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.308493   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:30.308500   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:30.308544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:30.346818   65622 cri.go:89] found id: ""
	I0318 22:02:30.346845   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.346853   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:30.346859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:30.346914   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:30.387265   65622 cri.go:89] found id: ""
	I0318 22:02:30.387298   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.387307   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:30.387317   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:30.387336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:30.446382   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:30.446409   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:30.462305   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:30.462329   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:30.538560   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:30.538583   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:30.538598   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:30.622537   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:30.622571   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:33.172154   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:33.186477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:33.186540   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:33.223436   65622 cri.go:89] found id: ""
	I0318 22:02:33.223464   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.223474   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:33.223481   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:33.223537   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:33.264785   65622 cri.go:89] found id: ""
	I0318 22:02:33.264810   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.264821   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:33.264829   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:33.264881   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:33.308014   65622 cri.go:89] found id: ""
	I0318 22:02:33.308035   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.308045   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:33.308055   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:33.308109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:33.348188   65622 cri.go:89] found id: ""
	I0318 22:02:33.348215   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.348224   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:33.348231   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:33.348292   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:33.387905   65622 cri.go:89] found id: ""
	I0318 22:02:33.387935   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.387946   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:33.387954   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:33.388015   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:33.430915   65622 cri.go:89] found id: ""
	I0318 22:02:33.430944   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.430956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:33.430964   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:33.431019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:33.473103   65622 cri.go:89] found id: ""
	I0318 22:02:33.473128   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.473135   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:33.473140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:33.473197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:33.512960   65622 cri.go:89] found id: ""
	I0318 22:02:33.512992   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.513003   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:33.513015   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:33.513029   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:33.569517   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:33.569554   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:33.585235   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:33.585263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:33.659494   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:33.659519   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:33.659538   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:33.749134   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:33.749181   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:31.820868   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.822075   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.802719   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:34.301730   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.692959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.194871   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.306589   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:36.321602   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:36.321654   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:36.364047   65622 cri.go:89] found id: ""
	I0318 22:02:36.364068   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.364076   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:36.364083   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:36.364139   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:36.406084   65622 cri.go:89] found id: ""
	I0318 22:02:36.406111   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.406119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:36.406125   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:36.406176   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:36.450861   65622 cri.go:89] found id: ""
	I0318 22:02:36.450887   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.450895   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:36.450900   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:36.450946   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:36.493979   65622 cri.go:89] found id: ""
	I0318 22:02:36.494006   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.494014   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:36.494020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:36.494079   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:36.539123   65622 cri.go:89] found id: ""
	I0318 22:02:36.539150   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.539160   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:36.539167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:36.539233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:36.577460   65622 cri.go:89] found id: ""
	I0318 22:02:36.577485   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.577495   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:36.577502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:36.577546   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:36.615276   65622 cri.go:89] found id: ""
	I0318 22:02:36.615300   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.615308   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:36.615313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:36.615369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:36.652756   65622 cri.go:89] found id: ""
	I0318 22:02:36.652775   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.652782   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:36.652790   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:36.652802   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:36.706253   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:36.706282   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:36.722032   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:36.722055   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:36.797758   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:36.797783   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:36.797799   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:36.875589   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:36.875622   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:39.422267   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:39.436967   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:39.437040   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:39.479916   65622 cri.go:89] found id: ""
	I0318 22:02:39.479941   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.479950   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:39.479956   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:39.480012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:39.542890   65622 cri.go:89] found id: ""
	I0318 22:02:39.542920   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.542930   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:39.542937   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:39.542990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:39.588200   65622 cri.go:89] found id: ""
	I0318 22:02:39.588225   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.588233   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:39.588239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:39.588290   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:39.629014   65622 cri.go:89] found id: ""
	I0318 22:02:39.629036   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.629043   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:39.629049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:39.629105   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:39.675522   65622 cri.go:89] found id: ""
	I0318 22:02:39.675551   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.675561   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:39.675569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:39.675629   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:39.722842   65622 cri.go:89] found id: ""
	I0318 22:02:39.722873   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.722883   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:39.722890   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:39.722951   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:39.760410   65622 cri.go:89] found id: ""
	I0318 22:02:39.760440   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.760451   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:39.760458   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:39.760519   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:39.799982   65622 cri.go:89] found id: ""
	I0318 22:02:39.800007   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.800016   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:39.800027   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:39.800045   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:39.878784   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:39.878805   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:39.878821   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:39.965987   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:39.966021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:36.320427   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.321178   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.799943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:39.300691   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.699873   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.193658   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:40.015006   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:40.015040   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:40.068619   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:40.068648   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:42.586444   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:42.603310   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:42.603394   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:42.645260   65622 cri.go:89] found id: ""
	I0318 22:02:42.645288   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.645296   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:42.645301   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:42.645360   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:42.682004   65622 cri.go:89] found id: ""
	I0318 22:02:42.682029   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.682036   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:42.682042   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:42.682086   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:42.722886   65622 cri.go:89] found id: ""
	I0318 22:02:42.722922   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.722939   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:42.722947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:42.723008   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:42.759183   65622 cri.go:89] found id: ""
	I0318 22:02:42.759208   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.759218   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:42.759224   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:42.759283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:42.799292   65622 cri.go:89] found id: ""
	I0318 22:02:42.799316   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.799325   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:42.799337   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:42.799389   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:42.838821   65622 cri.go:89] found id: ""
	I0318 22:02:42.838848   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.838856   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:42.838861   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:42.838908   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:42.877889   65622 cri.go:89] found id: ""
	I0318 22:02:42.877917   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.877927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:42.877935   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:42.877991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:42.921283   65622 cri.go:89] found id: ""
	I0318 22:02:42.921310   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.921323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:42.921334   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:42.921348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:43.000405   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:43.000444   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:43.042091   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:43.042116   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:43.094030   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:43.094059   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:43.108612   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:43.108647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:43.194388   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:40.321388   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:42.822538   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.799159   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.800027   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.299156   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.693317   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.194419   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:45.694881   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:45.709833   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:45.709897   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:45.749770   65622 cri.go:89] found id: ""
	I0318 22:02:45.749797   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.749806   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:45.749812   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:45.749866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:45.794879   65622 cri.go:89] found id: ""
	I0318 22:02:45.794909   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.794920   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:45.794928   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:45.794988   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:45.841587   65622 cri.go:89] found id: ""
	I0318 22:02:45.841608   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.841618   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:45.841625   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:45.841725   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:45.884972   65622 cri.go:89] found id: ""
	I0318 22:02:45.885004   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.885015   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:45.885023   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:45.885084   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:45.936170   65622 cri.go:89] found id: ""
	I0318 22:02:45.936204   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.936215   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:45.936223   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:45.936286   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:45.993684   65622 cri.go:89] found id: ""
	I0318 22:02:45.993708   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.993715   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:45.993720   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:45.993766   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:46.048422   65622 cri.go:89] found id: ""
	I0318 22:02:46.048445   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.048453   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:46.048459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:46.048512   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:46.087173   65622 cri.go:89] found id: ""
	I0318 22:02:46.087197   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.087206   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:46.087214   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:46.087227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:46.168633   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:46.168661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:46.168675   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:46.250797   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:46.250827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:46.302862   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:46.302883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:46.358096   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:46.358125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:48.874275   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:48.890166   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:48.890231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:48.930832   65622 cri.go:89] found id: ""
	I0318 22:02:48.930861   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.930869   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:48.930875   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:48.930919   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:48.972784   65622 cri.go:89] found id: ""
	I0318 22:02:48.972809   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.972819   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:48.972826   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:48.972884   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:49.011201   65622 cri.go:89] found id: ""
	I0318 22:02:49.011222   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.011229   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:49.011235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:49.011277   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:49.050457   65622 cri.go:89] found id: ""
	I0318 22:02:49.050480   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.050496   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:49.050502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:49.050565   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:49.087585   65622 cri.go:89] found id: ""
	I0318 22:02:49.087611   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.087621   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:49.087629   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:49.087687   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:49.126761   65622 cri.go:89] found id: ""
	I0318 22:02:49.126794   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.126805   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:49.126813   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:49.126874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:49.166045   65622 cri.go:89] found id: ""
	I0318 22:02:49.166074   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.166085   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:49.166092   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:49.166147   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:49.205624   65622 cri.go:89] found id: ""
	I0318 22:02:49.205650   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.205660   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:49.205670   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:49.205684   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:49.257864   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:49.257891   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:49.272581   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:49.272606   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:49.349960   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:49.349981   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:49.349996   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:49.438873   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:49.438916   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:45.322637   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:47.820481   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.300259   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.798429   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.693209   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.693611   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:51.984840   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:52.002378   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:52.002436   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:52.040871   65622 cri.go:89] found id: ""
	I0318 22:02:52.040890   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.040898   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:52.040917   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:52.040973   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:52.076062   65622 cri.go:89] found id: ""
	I0318 22:02:52.076083   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.076090   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:52.076096   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:52.076167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:52.119597   65622 cri.go:89] found id: ""
	I0318 22:02:52.119621   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.119629   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:52.119635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:52.119690   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:52.157892   65622 cri.go:89] found id: ""
	I0318 22:02:52.157919   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.157929   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:52.157936   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:52.157995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:52.196738   65622 cri.go:89] found id: ""
	I0318 22:02:52.196760   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.196767   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:52.196772   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:52.196836   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:52.234012   65622 cri.go:89] found id: ""
	I0318 22:02:52.234036   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.234043   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:52.234049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:52.234104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:52.273720   65622 cri.go:89] found id: ""
	I0318 22:02:52.273750   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.273761   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:52.273769   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:52.273817   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:52.317495   65622 cri.go:89] found id: ""
	I0318 22:02:52.317525   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.317535   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:52.317545   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:52.317619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:52.371640   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:52.371666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:52.387141   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:52.387165   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:52.469009   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:52.469035   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:52.469047   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:52.550848   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:52.550880   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:50.322017   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.820364   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:54.820692   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.799942   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.301665   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.694058   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.194171   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.096980   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:55.111353   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:55.111406   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:55.155832   65622 cri.go:89] found id: ""
	I0318 22:02:55.155857   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.155875   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:55.155882   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:55.155942   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:55.195477   65622 cri.go:89] found id: ""
	I0318 22:02:55.195499   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.195509   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:55.195516   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:55.195567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:55.234536   65622 cri.go:89] found id: ""
	I0318 22:02:55.234564   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.234574   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:55.234582   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:55.234640   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:55.270955   65622 cri.go:89] found id: ""
	I0318 22:02:55.270977   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.270984   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:55.270989   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:55.271033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:55.308883   65622 cri.go:89] found id: ""
	I0318 22:02:55.308919   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.308930   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:55.308937   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:55.308985   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:55.355259   65622 cri.go:89] found id: ""
	I0318 22:02:55.355284   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.355294   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:55.355301   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:55.355364   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:55.392385   65622 cri.go:89] found id: ""
	I0318 22:02:55.392409   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.392417   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:55.392423   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:55.392466   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:55.433773   65622 cri.go:89] found id: ""
	I0318 22:02:55.433794   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.433802   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:55.433810   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:55.433827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:55.518513   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:55.518536   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:55.518553   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:55.602717   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:55.602751   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:55.652409   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:55.652436   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:55.707150   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:55.707175   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.223146   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:58.240213   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:58.240288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:58.280676   65622 cri.go:89] found id: ""
	I0318 22:02:58.280702   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.280711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:58.280719   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:58.280778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:58.324490   65622 cri.go:89] found id: ""
	I0318 22:02:58.324515   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.324524   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:58.324531   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:58.324592   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:58.370256   65622 cri.go:89] found id: ""
	I0318 22:02:58.370288   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.370298   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:58.370309   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:58.370369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:58.419969   65622 cri.go:89] found id: ""
	I0318 22:02:58.420002   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.420012   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:58.420020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:58.420082   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:58.464916   65622 cri.go:89] found id: ""
	I0318 22:02:58.464942   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.464950   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:58.464956   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:58.465016   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:58.511388   65622 cri.go:89] found id: ""
	I0318 22:02:58.511415   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.511425   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:58.511433   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:58.511500   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:58.555314   65622 cri.go:89] found id: ""
	I0318 22:02:58.555344   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.555356   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:58.555364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:58.555426   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:58.595200   65622 cri.go:89] found id: ""
	I0318 22:02:58.595229   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.595239   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:58.595249   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:58.595263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:58.642037   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:58.642069   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:58.700216   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:58.700247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.715851   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:58.715882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:58.792139   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:58.792158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:58.792171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:56.821255   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:58.828524   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.303516   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.791851   65211 pod_ready.go:81] duration metric: took 4m0.000068811s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	E0318 22:02:57.791889   65211 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:02:57.791913   65211 pod_ready.go:38] duration metric: took 4m13.55705031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:02:57.791938   65211 kubeadm.go:591] duration metric: took 4m20.862001116s to restartPrimaryControlPlane
	W0318 22:02:57.792000   65211 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:02:57.792027   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:02:57.692975   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:59.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.395212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:01.411364   65622 kubeadm.go:591] duration metric: took 4m3.302597324s to restartPrimaryControlPlane
	W0318 22:03:01.411442   65622 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:01.411474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:02.800222   65622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.388721926s)
	I0318 22:03:02.800302   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:02.817517   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:02.832036   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:02.844307   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:02.844324   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:02.844381   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:02.857804   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:02.857882   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:02.871307   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:02.883191   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:02.883252   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:02.896457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.908089   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:02.908147   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.920327   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:02.932098   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:02.932158   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:02.944129   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:03.034197   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:03:03.034333   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:03.204271   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:03.204501   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:03.204645   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:03.415789   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:03.417688   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:03.417801   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:03.417902   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:03.418026   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:03.418129   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:03.418242   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:03.418324   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:03.418420   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:03.418502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:03.418614   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:03.418744   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:03.418823   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:03.418916   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:03.644844   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:03.912013   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:04.097560   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:04.222469   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:04.239066   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:04.250168   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:04.250225   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:04.399277   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:04.401154   65622 out.go:204]   - Booting up control plane ...
	I0318 22:03:04.401283   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:04.406500   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:04.407544   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:04.410177   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:04.418949   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:01.321045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:03.322008   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.694585   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:04.195750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:05.322087   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:07.820940   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:09.822652   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:06.693803   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:08.693856   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:10.694375   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:12.321504   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:14.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:13.192173   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:15.193816   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:16.822327   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.322059   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:17.691761   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.691867   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.322674   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.823374   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.692710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.695045   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.192838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.322370   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:28.820807   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.165008   65211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.372946393s)
	I0318 22:03:30.165087   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:30.184259   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:30.198417   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:30.210595   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:30.210624   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:30.210675   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:30.222159   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:30.222210   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:30.234099   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:30.244546   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:30.244621   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:30.255192   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.265777   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:30.265833   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.276674   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:30.286349   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:30.286402   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:30.296530   65211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:30.522414   65211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:03:28.193120   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.194300   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:31.321986   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:33.823045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:32.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:34.693824   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.294937   65211 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:03:39.295015   65211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:39.295142   65211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:39.295296   65211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:39.295451   65211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:39.295550   65211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:39.297047   65211 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:39.297135   65211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:39.297250   65211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:39.297368   65211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:39.297461   65211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:39.297557   65211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:39.297640   65211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:39.297742   65211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:39.297831   65211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:39.297939   65211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:39.298032   65211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:39.298084   65211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:39.298206   65211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:39.298301   65211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:39.298376   65211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:39.298451   65211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:39.298518   65211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:39.298612   65211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:39.298693   65211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:39.299829   65211 out.go:204]   - Booting up control plane ...
	I0318 22:03:39.299959   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:39.300052   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:39.300150   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:39.300308   65211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:39.300444   65211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:39.300496   65211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:39.300713   65211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:39.300829   65211 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003359 seconds
	I0318 22:03:39.300997   65211 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:03:39.301155   65211 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:03:39.301228   65211 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:03:39.301451   65211 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-141758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:03:39.301526   65211 kubeadm.go:309] [bootstrap-token] Using token: p114v6.erax4pf5xkn6x2it
	I0318 22:03:39.302903   65211 out.go:204]   - Configuring RBAC rules ...
	I0318 22:03:39.303025   65211 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:03:39.303133   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:03:39.303301   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:03:39.303479   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:03:39.303574   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:03:39.303651   65211 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:03:39.303810   65211 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:03:39.303886   65211 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:03:39.303960   65211 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:03:39.303972   65211 kubeadm.go:309] 
	I0318 22:03:39.304041   65211 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:03:39.304050   65211 kubeadm.go:309] 
	I0318 22:03:39.304158   65211 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:03:39.304173   65211 kubeadm.go:309] 
	I0318 22:03:39.304208   65211 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:03:39.304292   65211 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:03:39.304368   65211 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:03:39.304377   65211 kubeadm.go:309] 
	I0318 22:03:39.304456   65211 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:03:39.304465   65211 kubeadm.go:309] 
	I0318 22:03:39.304547   65211 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:03:39.304570   65211 kubeadm.go:309] 
	I0318 22:03:39.304649   65211 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:03:39.304754   65211 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:03:39.304861   65211 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:03:39.304878   65211 kubeadm.go:309] 
	I0318 22:03:39.305028   65211 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:03:39.305134   65211 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:03:39.305144   65211 kubeadm.go:309] 
	I0318 22:03:39.305248   65211 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305390   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:03:39.305422   65211 kubeadm.go:309] 	--control-plane 
	I0318 22:03:39.305430   65211 kubeadm.go:309] 
	I0318 22:03:39.305545   65211 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:03:39.305556   65211 kubeadm.go:309] 
	I0318 22:03:39.305676   65211 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305843   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:03:39.305859   65211 cni.go:84] Creating CNI manager for ""
	I0318 22:03:39.305873   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:03:39.307416   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:03:36.323956   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:38.821180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.308819   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:03:39.375416   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 22:03:39.434235   65211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:03:39.434303   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.434360   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-141758 minikube.k8s.io/updated_at=2024_03_18T22_03_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=embed-certs-141758 minikube.k8s.io/primary=true
	I0318 22:03:39.677778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.708540   65211 ops.go:34] apiserver oom_adj: -16
	I0318 22:03:40.178803   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:40.678832   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:37.193451   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.193667   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:44.419883   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:03:44.420568   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:44.420749   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:40.821359   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.323788   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:41.678334   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.177921   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.678115   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.178034   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.678655   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.177993   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.678581   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.177929   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.678124   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:46.178423   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.693587   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.693965   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.195060   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:49.421054   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:49.421381   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:45.821472   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:47.822362   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.678288   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.178394   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.678824   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.678144   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.178090   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.678295   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.178829   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.677856   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:51.177778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.197085   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:50.693056   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:51.192418   65699 pod_ready.go:81] duration metric: took 4m0.006727095s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:51.192452   65699 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0318 22:03:51.192462   65699 pod_ready.go:38] duration metric: took 4m5.551753918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
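The pod_ready wait that just expired polls the pod's Ready condition until a timeout. A minimal client-go sketch of that pattern follows; it is not minikube's pod_ready implementation, and the kubeconfig path and pod name are taken from the log only for illustration.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path; the test harness uses its own profile kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, give up after 4m, mirroring the "took 4m0s ... to be Ready" timeout above.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-rdthh", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				return isPodReady(pod), nil
			})
		// err is "context deadline exceeded" when the pod never turns Ready, as in the log above.
		fmt.Println("wait result:", err)
	}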
	I0318 22:03:51.192480   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:51.192514   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:51.192574   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:51.248553   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:51.248575   65699 cri.go:89] found id: ""
	I0318 22:03:51.248583   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:51.248634   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.254205   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:51.254270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:51.303508   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.303534   65699 cri.go:89] found id: ""
	I0318 22:03:51.303543   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:51.303600   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.310160   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:51.310212   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:51.357409   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:51.357429   65699 cri.go:89] found id: ""
	I0318 22:03:51.357436   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:51.357480   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.362683   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:51.362744   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:51.413520   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.413550   65699 cri.go:89] found id: ""
	I0318 22:03:51.413560   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:51.413619   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.419412   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:51.419483   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:51.468338   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:51.468365   65699 cri.go:89] found id: ""
	I0318 22:03:51.468374   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:51.468432   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.474006   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:51.474070   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:51.520166   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:51.520188   65699 cri.go:89] found id: ""
	I0318 22:03:51.520195   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:51.520246   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.526087   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:51.526148   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:51.570735   65699 cri.go:89] found id: ""
	I0318 22:03:51.570761   65699 logs.go:276] 0 containers: []
	W0318 22:03:51.570772   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:51.570779   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:51.570832   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:51.678380   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.178543   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.677807   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.814739   65211 kubeadm.go:1107] duration metric: took 13.380493852s to wait for elevateKubeSystemPrivileges
	W0318 22:03:52.814773   65211 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:03:52.814782   65211 kubeadm.go:393] duration metric: took 5m15.94869953s to StartCluster
	I0318 22:03:52.814803   65211 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.814883   65211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:03:52.816928   65211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.817192   65211 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:03:52.818800   65211 out.go:177] * Verifying Kubernetes components...
	I0318 22:03:52.817486   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:03:52.817499   65211 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:03:52.820175   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:03:52.818838   65211 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-141758"
	I0318 22:03:52.820277   65211 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-141758"
	W0318 22:03:52.820288   65211 addons.go:243] addon storage-provisioner should already be in state true
	I0318 22:03:52.818844   65211 addons.go:69] Setting metrics-server=true in profile "embed-certs-141758"
	I0318 22:03:52.820369   65211 addons.go:234] Setting addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:52.818848   65211 addons.go:69] Setting default-storageclass=true in profile "embed-certs-141758"
	I0318 22:03:52.820429   65211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-141758"
	I0318 22:03:52.820317   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	W0318 22:03:52.820386   65211 addons.go:243] addon metrics-server should already be in state true
	I0318 22:03:52.820697   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.820821   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820846   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.820872   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820899   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.821079   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.821107   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.839829   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0318 22:03:52.839850   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I0318 22:03:52.839992   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840448   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840986   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841010   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841124   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841144   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841148   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841162   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841385   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841428   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841557   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841639   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.842001   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842043   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.842049   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842068   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.845295   65211 addons.go:234] Setting addon default-storageclass=true in "embed-certs-141758"
	W0318 22:03:52.845315   65211 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:03:52.845343   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.845692   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.845736   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.864111   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0318 22:03:52.864141   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0318 22:03:52.864614   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.864688   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.865181   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865199   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865318   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865334   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865556   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866107   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.866147   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.866343   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866630   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.868253   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.870076   65211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:03:52.871315   65211 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:52.871333   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:03:52.871352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.873922   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0318 22:03:52.874420   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.874924   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.874944   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.875080   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.875194   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.875254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.875346   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.875478   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.875718   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.875733   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.876060   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.876234   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.877582   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.879040   65211 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:03:50.320724   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.321791   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:54.821845   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.880124   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:03:52.880135   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:03:52.880152   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.882530   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.882957   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.882979   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.883230   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.883371   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.883507   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.883638   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.886181   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0318 22:03:52.886563   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.887043   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.887064   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.887416   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.887599   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.888998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.889490   65211 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:52.889504   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:03:52.889519   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.891985   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892380   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.892435   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.892776   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.892949   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.893066   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:53.047557   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:03:53.098470   65211 node_ready.go:35] waiting up to 6m0s for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111074   65211 node_ready.go:49] node "embed-certs-141758" has status "Ready":"True"
	I0318 22:03:53.111093   65211 node_ready.go:38] duration metric: took 12.593803ms for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111102   65211 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:53.127297   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:53.167460   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:03:53.167476   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:03:53.199789   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:53.221070   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:53.233431   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:03:53.233452   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:03:53.298339   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:53.298368   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:03:53.415046   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:55.057164   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.85734001s)
	I0318 22:03:55.057233   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057252   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.057590   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057601   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.057614   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057634   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057888   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057929   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.064097   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.064111   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.064376   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.064402   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.064418   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.138948   65211 pod_ready.go:92] pod "coredns-5dd5756b68-k675p" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.138968   65211 pod_ready.go:81] duration metric: took 2.011647544s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.138976   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150187   65211 pod_ready.go:92] pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.150204   65211 pod_ready.go:81] duration metric: took 11.222328ms for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150213   65211 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157054   65211 pod_ready.go:92] pod "etcd-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.157073   65211 pod_ready.go:81] duration metric: took 6.853876ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157086   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.167962   65211 pod_ready.go:92] pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.167986   65211 pod_ready.go:81] duration metric: took 10.892042ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.168000   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177187   65211 pod_ready.go:92] pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.177204   65211 pod_ready.go:81] duration metric: took 9.197593ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177213   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.515883   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.294780085s)
	I0318 22:03:55.515937   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.515948   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.515952   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.100869127s)
	I0318 22:03:55.515994   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516014   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516301   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516378   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516469   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516481   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516491   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516406   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516451   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516665   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516683   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516691   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516772   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516839   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516867   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519334   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.519340   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.519355   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519365   65211 addons.go:470] Verifying addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:55.520941   65211 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 22:03:55.522318   65211 addons.go:505] duration metric: took 2.704813533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 22:03:55.545590   65211 pod_ready.go:92] pod "kube-proxy-jltc7" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.545614   65211 pod_ready.go:81] duration metric: took 368.395697ms for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.545625   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932726   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.932750   65211 pod_ready.go:81] duration metric: took 387.117475ms for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932757   65211 pod_ready.go:38] duration metric: took 2.821645915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:55.932771   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:55.932815   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.969924   65211 api_server.go:72] duration metric: took 3.152691986s to wait for apiserver process to appear ...
	I0318 22:03:55.969955   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.969977   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 22:03:55.976004   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 22:03:55.977450   65211 api_server.go:141] control plane version: v1.28.4
	I0318 22:03:55.977489   65211 api_server.go:131] duration metric: took 7.525909ms to wait for apiserver health ...
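The healthz probe above is a plain HTTPS GET against the apiserver. A short sketch of an equivalent check is below; skipping TLS verification is a simplification for this sketch, whereas the real check trusts the cluster CA from the minikube profile.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Assumption: verification is skipped here only to keep the sketch self-contained.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.243:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Prints "200 ok" when the apiserver is healthy, matching the log line above.
		fmt.Println(resp.StatusCode, string(body))
	}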
	I0318 22:03:55.977499   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:56.138403   65211 system_pods.go:59] 9 kube-system pods found
	I0318 22:03:56.138429   65211 system_pods.go:61] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.138434   65211 system_pods.go:61] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.138438   65211 system_pods.go:61] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.138441   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.138444   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.138448   65211 system_pods.go:61] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.138453   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.138462   65211 system_pods.go:61] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.138519   65211 system_pods.go:61] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 22:03:56.138532   65211 system_pods.go:74] duration metric: took 161.01924ms to wait for pod list to return data ...
	I0318 22:03:56.138544   65211 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:03:56.331884   65211 default_sa.go:45] found service account: "default"
	I0318 22:03:56.331926   65211 default_sa.go:55] duration metric: took 193.36174ms for default service account to be created ...
	I0318 22:03:56.331937   65211 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:03:56.536411   65211 system_pods.go:86] 9 kube-system pods found
	I0318 22:03:56.536443   65211 system_pods.go:89] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.536452   65211 system_pods.go:89] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.536459   65211 system_pods.go:89] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.536466   65211 system_pods.go:89] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.536472   65211 system_pods.go:89] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.536479   65211 system_pods.go:89] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.536486   65211 system_pods.go:89] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.536497   65211 system_pods.go:89] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.536507   65211 system_pods.go:89] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Running
	I0318 22:03:56.536518   65211 system_pods.go:126] duration metric: took 204.57366ms to wait for k8s-apps to be running ...
	I0318 22:03:56.536531   65211 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:03:56.536579   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:56.557315   65211 system_svc.go:56] duration metric: took 20.775851ms WaitForService to wait for kubelet
	I0318 22:03:56.557344   65211 kubeadm.go:576] duration metric: took 3.740121987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:03:56.557375   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:03:51.614216   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:51.614235   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.614239   65699 cri.go:89] found id: ""
	I0318 22:03:51.614245   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:51.614297   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.619100   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.623808   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:51.623827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:51.780027   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:51.780067   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.842134   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:51.842167   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.889769   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:51.889797   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.942502   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:51.942543   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:52.467986   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:52.468043   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:52.518980   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:52.519023   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:52.536546   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:52.536586   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:52.591854   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:52.591894   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:52.640783   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:52.640818   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:52.687934   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:52.687967   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:52.749690   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:52.749726   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:52.807019   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:52.807064   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:55.392930   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.415406   65699 api_server.go:72] duration metric: took 4m15.533409678s to wait for apiserver process to appear ...
	I0318 22:03:55.415435   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.415472   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:55.415523   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:55.474200   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:55.474227   65699 cri.go:89] found id: ""
	I0318 22:03:55.474237   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:55.474295   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.479787   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:55.479907   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:55.532114   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:55.532136   65699 cri.go:89] found id: ""
	I0318 22:03:55.532145   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:55.532202   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.537215   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:55.537270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:55.588633   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:55.588657   65699 cri.go:89] found id: ""
	I0318 22:03:55.588666   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:55.588723   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.595711   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:55.595777   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:55.646684   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:55.646704   65699 cri.go:89] found id: ""
	I0318 22:03:55.646714   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:55.646770   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.651920   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:55.651982   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:55.694948   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:55.694975   65699 cri.go:89] found id: ""
	I0318 22:03:55.694984   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:55.695035   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.700275   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:55.700343   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:55.740536   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:55.740559   65699 cri.go:89] found id: ""
	I0318 22:03:55.740568   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:55.740618   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.745384   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:55.745446   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:55.784614   65699 cri.go:89] found id: ""
	I0318 22:03:55.784645   65699 logs.go:276] 0 containers: []
	W0318 22:03:55.784657   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:55.784664   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:55.784727   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:55.827306   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:55.827334   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:55.827341   65699 cri.go:89] found id: ""
	I0318 22:03:55.827349   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:55.827404   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.832314   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.838497   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:55.838520   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:55.857285   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:55.857319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:55.984597   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:55.984629   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:56.044283   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:56.044339   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:56.100329   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:56.100363   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:56.173231   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:56.173270   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:56.221280   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:56.221310   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:56.274110   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:56.274138   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:56.332863   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:56.332891   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:56.374289   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:56.374317   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:56.423793   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:56.423827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:56.478696   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:56.478734   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:56.518600   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:56.518627   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:56.731788   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:03:56.731810   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 22:03:56.731823   65211 node_conditions.go:105] duration metric: took 174.442649ms to run NodePressure ...
	I0318 22:03:56.731835   65211 start.go:240] waiting for startup goroutines ...
	I0318 22:03:56.731845   65211 start.go:245] waiting for cluster config update ...
	I0318 22:03:56.731857   65211 start.go:254] writing updated cluster config ...
	I0318 22:03:56.732109   65211 ssh_runner.go:195] Run: rm -f paused
	I0318 22:03:56.778660   65211 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:03:56.780431   65211 out.go:177] * Done! kubectl is now configured to use "embed-certs-141758" cluster and "default" namespace by default
	I0318 22:03:59.422001   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:59.422212   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:56.814631   65170 pod_ready.go:81] duration metric: took 4m0.000725499s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:56.814661   65170 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:03:56.814684   65170 pod_ready.go:38] duration metric: took 4m11.531709977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:56.814712   65170 kubeadm.go:591] duration metric: took 4m19.482098142s to restartPrimaryControlPlane
	W0318 22:03:56.814767   65170 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:56.814797   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:59.480665   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 22:03:59.485792   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 22:03:59.487343   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 22:03:59.487364   65699 api_server.go:131] duration metric: took 4.071921663s to wait for apiserver health ...
	I0318 22:03:59.487375   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:59.487406   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:59.487462   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:59.540845   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:59.540872   65699 cri.go:89] found id: ""
	I0318 22:03:59.540881   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:59.540958   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.547759   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:59.547824   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:59.593015   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:59.593042   65699 cri.go:89] found id: ""
	I0318 22:03:59.593051   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:59.593106   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.598169   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:59.598233   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:59.638484   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:59.638508   65699 cri.go:89] found id: ""
	I0318 22:03:59.638517   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:59.638575   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.643353   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:59.643416   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:59.687190   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:59.687208   65699 cri.go:89] found id: ""
	I0318 22:03:59.687216   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:59.687271   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.692481   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:59.692550   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:59.735798   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:59.735824   65699 cri.go:89] found id: ""
	I0318 22:03:59.735834   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:59.735893   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.742192   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:59.742263   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:59.782961   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:59.782989   65699 cri.go:89] found id: ""
	I0318 22:03:59.783000   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:59.783060   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.788247   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:59.788325   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:59.836955   65699 cri.go:89] found id: ""
	I0318 22:03:59.836983   65699 logs.go:276] 0 containers: []
	W0318 22:03:59.836992   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:59.836998   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:59.837052   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:59.879225   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:59.879250   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:59.879255   65699 cri.go:89] found id: ""
	I0318 22:03:59.879264   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:59.879323   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.884380   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.889289   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:59.889316   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:04:00.307344   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:04:00.307389   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:04:00.325472   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:04:00.325496   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:04:00.388254   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:04:00.388288   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:04:00.430203   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:04:00.430241   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:04:00.476834   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:04:00.476861   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:04:00.532672   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:04:00.532703   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:04:00.572174   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:04:00.572202   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:04:00.624250   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:04:00.624283   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:04:00.688520   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:04:00.688551   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:04:00.764279   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:04:00.764319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:04:00.903231   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:04:00.903262   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:04:00.974836   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:04:00.974869   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:04:03.547135   65699 system_pods.go:59] 8 kube-system pods found
	I0318 22:04:03.547166   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.547172   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.547180   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.547186   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.547193   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.547198   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.547208   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.547214   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.547224   65699 system_pods.go:74] duration metric: took 4.059842092s to wait for pod list to return data ...
	I0318 22:04:03.547233   65699 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:03.554656   65699 default_sa.go:45] found service account: "default"
	I0318 22:04:03.554682   65699 default_sa.go:55] duration metric: took 7.437557ms for default service account to be created ...
	I0318 22:04:03.554692   65699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:03.562342   65699 system_pods.go:86] 8 kube-system pods found
	I0318 22:04:03.562369   65699 system_pods.go:89] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.562374   65699 system_pods.go:89] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.562378   65699 system_pods.go:89] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.562383   65699 system_pods.go:89] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.562387   65699 system_pods.go:89] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.562391   65699 system_pods.go:89] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.562397   65699 system_pods.go:89] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.562402   65699 system_pods.go:89] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.562410   65699 system_pods.go:126] duration metric: took 7.712357ms to wait for k8s-apps to be running ...
	I0318 22:04:03.562424   65699 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:03.562470   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:03.579949   65699 system_svc.go:56] duration metric: took 17.517801ms WaitForService to wait for kubelet
	I0318 22:04:03.579977   65699 kubeadm.go:576] duration metric: took 4m23.697982351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:03.579993   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:03.585009   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:03.585037   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:03.585049   65699 node_conditions.go:105] duration metric: took 5.050614ms to run NodePressure ...
	I0318 22:04:03.585063   65699 start.go:240] waiting for startup goroutines ...
	I0318 22:04:03.585075   65699 start.go:245] waiting for cluster config update ...
	I0318 22:04:03.585089   65699 start.go:254] writing updated cluster config ...
	I0318 22:04:03.585426   65699 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:03.634969   65699 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 22:04:03.637561   65699 out.go:177] * Done! kubectl is now configured to use "no-preload-963041" cluster and "default" namespace by default
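
The api_server.go lines above poll https://192.168.72.84:8443/healthz until it answers 200 before checking pods and the default service account. The sketch below shows a minimal version of that polling loop, assuming a self-signed apiserver certificate; the timeouts and the InsecureSkipVerify setting are assumptions for illustration, not minikube's actual code, and a real client would pin the cluster CA instead.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skip verification only because the sketch assumes a self-signed cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.84:8443/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}
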
	I0318 22:04:19.422826   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:19.423111   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:29.143869   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.329052492s)
	I0318 22:04:29.143935   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:29.161708   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:04:29.173738   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:04:29.185221   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:04:29.185241   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 22:04:29.185273   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 22:04:29.196326   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:04:29.196382   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:04:29.207305   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 22:04:29.217759   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:04:29.217811   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:04:29.228350   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.239148   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:04:29.239191   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.251191   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 22:04:29.262291   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:04:29.262339   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:04:29.273343   65170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:04:29.332561   65170 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:04:29.333329   65170 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:04:29.496432   65170 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:04:29.496558   65170 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:04:29.496720   65170 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:04:29.728202   65170 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:04:29.730047   65170 out.go:204]   - Generating certificates and keys ...
	I0318 22:04:29.730126   65170 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:04:29.730202   65170 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:04:29.730297   65170 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:04:29.730669   65170 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:04:29.731209   65170 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:04:29.731887   65170 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:04:29.732569   65170 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:04:29.733362   65170 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:04:29.734045   65170 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:04:29.734477   65170 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:04:29.735264   65170 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:04:29.735340   65170 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:04:30.122363   65170 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:04:30.296021   65170 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:04:30.555774   65170 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:04:30.674403   65170 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:04:30.674943   65170 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:04:30.677509   65170 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:04:30.679219   65170 out.go:204]   - Booting up control plane ...
	I0318 22:04:30.679319   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:04:30.679402   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:04:30.681975   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:04:30.701015   65170 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:04:30.701902   65170 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:04:30.702104   65170 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:04:30.843019   65170 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:04:36.846312   65170 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002976 seconds
	I0318 22:04:36.846520   65170 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:04:36.870892   65170 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:04:37.410373   65170 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:04:37.410649   65170 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-660775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:04:37.935730   65170 kubeadm.go:309] [bootstrap-token] Using token: jwgiie.tp4r5ug6emevtbxj
	I0318 22:04:37.937024   65170 out.go:204]   - Configuring RBAC rules ...
	I0318 22:04:37.937156   65170 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:04:37.943204   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:04:37.951400   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:04:37.958005   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:04:37.962013   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:04:37.965783   65170 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:04:37.985150   65170 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:04:38.241561   65170 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:04:38.355495   65170 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:04:38.356452   65170 kubeadm.go:309] 
	I0318 22:04:38.356511   65170 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:04:38.356520   65170 kubeadm.go:309] 
	I0318 22:04:38.356598   65170 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:04:38.356609   65170 kubeadm.go:309] 
	I0318 22:04:38.356667   65170 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:04:38.356774   65170 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:04:38.356828   65170 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:04:38.356844   65170 kubeadm.go:309] 
	I0318 22:04:38.356898   65170 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:04:38.356916   65170 kubeadm.go:309] 
	I0318 22:04:38.356976   65170 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:04:38.356984   65170 kubeadm.go:309] 
	I0318 22:04:38.357030   65170 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:04:38.357093   65170 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:04:38.357161   65170 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:04:38.357168   65170 kubeadm.go:309] 
	I0318 22:04:38.357263   65170 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:04:38.357364   65170 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:04:38.357376   65170 kubeadm.go:309] 
	I0318 22:04:38.357495   65170 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.357657   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:04:38.357707   65170 kubeadm.go:309] 	--control-plane 
	I0318 22:04:38.357724   65170 kubeadm.go:309] 
	I0318 22:04:38.357861   65170 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:04:38.357873   65170 kubeadm.go:309] 
	I0318 22:04:38.357986   65170 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.358144   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:04:38.358726   65170 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:38.358772   65170 cni.go:84] Creating CNI manager for ""
	I0318 22:04:38.358789   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:04:38.360246   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:04:38.361264   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:04:38.378420   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
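
The bridge CNI step above writes a conflist to /etc/cni/net.d/1-k8s.conflist. The sketch below writes a generic bridge + host-local configuration of that shape; the JSON shown is an illustrative example, not the exact 457-byte file minikube ships, and the subnet is an assumption.

	package main

	import (
		"os"
		"path/filepath"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			panic(err)
		}
		// Low lexical prefix ("1-") so the runtime picks this config up first.
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}
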
	I0318 22:04:38.482111   65170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:04:38.482178   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:38.482194   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-660775 minikube.k8s.io/updated_at=2024_03_18T22_04_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=default-k8s-diff-port-660775 minikube.k8s.io/primary=true
	I0318 22:04:38.617420   65170 ops.go:34] apiserver oom_adj: -16
	I0318 22:04:38.828087   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.328292   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.828411   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.328829   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.828338   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.329118   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.828239   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.328296   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.828241   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.329151   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.829036   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.328224   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.828465   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.328632   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.828289   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.328321   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.828493   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.329008   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.828789   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.328727   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.829024   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.329010   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.828311   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.328474   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.445593   65170 kubeadm.go:1107] duration metric: took 11.963480655s to wait for elevateKubeSystemPrivileges
	W0318 22:04:50.445640   65170 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:04:50.445651   65170 kubeadm.go:393] duration metric: took 5m13.168616417s to StartCluster
	I0318 22:04:50.445672   65170 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.445754   65170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:04:50.447789   65170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.448086   65170 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:04:50.449989   65170 out.go:177] * Verifying Kubernetes components...
	I0318 22:04:50.448238   65170 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:04:50.450030   65170 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450044   65170 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450068   65170 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-660775"
	I0318 22:04:50.450070   65170 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.450078   65170 addons.go:243] addon storage-provisioner should already be in state true
	W0318 22:04:50.450082   65170 addons.go:243] addon metrics-server should already be in state true
	I0318 22:04:50.450105   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450116   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450033   65170 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450181   65170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-660775"
	I0318 22:04:50.450493   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450516   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450585   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450628   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.448310   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:04:50.452465   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:04:50.466764   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0318 22:04:50.468214   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0318 22:04:50.468460   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.468676   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469019   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469038   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469182   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469195   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469254   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0318 22:04:50.469549   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.469605   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469603   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470035   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.470053   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.470320   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470350   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470381   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470385   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470395   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470535   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.473854   65170 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.473879   65170 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:04:50.473907   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.474268   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.474301   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.485707   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39175
	I0318 22:04:50.486097   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0318 22:04:50.486278   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486675   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486809   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.486818   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487074   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.487086   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487345   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487513   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487561   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.487759   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.489284   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.491084   65170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:04:50.489730   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.492156   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0318 22:04:50.492539   65170 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.492549   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:04:50.492563   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.494057   65170 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:04:50.492998   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.495232   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:04:50.495253   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:04:50.495275   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.495863   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.495887   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.495952   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.496340   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496476   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.496620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.496757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.496861   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.497350   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.498004   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.498047   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.498450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.499027   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499235   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.499406   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.499565   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.499691   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.515126   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0318 22:04:50.515913   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.516473   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.516498   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.516800   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.517008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.518559   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.518811   65170 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.518825   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:04:50.518842   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.522625   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523156   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.523537   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523810   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.523984   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.524193   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.524430   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.682066   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:04:50.699269   65170 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709309   65170 node_ready.go:49] node "default-k8s-diff-port-660775" has status "Ready":"True"
	I0318 22:04:50.709330   65170 node_ready.go:38] duration metric: took 10.026001ms for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709342   65170 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.713958   65170 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720434   65170 pod_ready.go:92] pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.720459   65170 pod_ready.go:81] duration metric: took 6.477329ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720471   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725799   65170 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.725820   65170 pod_ready.go:81] duration metric: took 5.341405ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725829   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.730987   65170 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.731006   65170 pod_ready.go:81] duration metric: took 5.171376ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.731016   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737458   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.737481   65170 pod_ready.go:81] duration metric: took 6.458242ms for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737490   65170 pod_ready.go:38] duration metric: took 28.137606ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.737506   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:04:50.737560   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:04:50.757770   65170 api_server.go:72] duration metric: took 309.622189ms to wait for apiserver process to appear ...
	I0318 22:04:50.757795   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:04:50.757815   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 22:04:50.765732   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 22:04:50.769202   65170 api_server.go:141] control plane version: v1.28.4
	I0318 22:04:50.769228   65170 api_server.go:131] duration metric: took 11.424563ms to wait for apiserver health ...
	I0318 22:04:50.769238   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:04:50.831223   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.859994   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:04:50.860014   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:04:50.864994   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.905212   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:04:50.905257   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:04:50.918389   65170 system_pods.go:59] 4 kube-system pods found
	I0318 22:04:50.918416   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:50.918422   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:50.918426   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:50.918429   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:50.918435   65170 system_pods.go:74] duration metric: took 149.190745ms to wait for pod list to return data ...
	I0318 22:04:50.918442   65170 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:50.993150   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:50.993174   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:04:51.056974   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:51.124585   65170 default_sa.go:45] found service account: "default"
	I0318 22:04:51.124612   65170 default_sa.go:55] duration metric: took 206.163161ms for default service account to be created ...
	I0318 22:04:51.124624   65170 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:51.347373   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.347408   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347419   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347426   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.347433   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.347440   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.347452   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.347458   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.347478   65170 retry.go:31] will retry after 201.830143ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.556559   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.556594   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556605   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556621   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.556630   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.556638   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.556648   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.556663   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.556681   65170 retry.go:31] will retry after 312.139871ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.878515   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.878546   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878554   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878562   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.878568   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.878573   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.878579   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.878582   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.878596   65170 retry.go:31] will retry after 379.864885ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.364944   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.364971   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364979   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364987   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.364995   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.365002   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.365011   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:52.365018   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.365039   65170 retry.go:31] will retry after 598.040475ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.752856   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921596456s)
	I0318 22:04:52.752915   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.752928   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753278   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753303   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.753314   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.753323   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753565   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753580   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.781081   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.781102   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.781396   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.781417   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.973228   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.973256   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Running
	I0318 22:04:52.973262   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Running
	I0318 22:04:52.973269   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.973275   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.973282   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.973289   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Running
	I0318 22:04:52.973295   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.973304   65170 system_pods.go:126] duration metric: took 1.848673952s to wait for k8s-apps to be running ...
	I0318 22:04:52.973310   65170 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:52.973361   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:53.343164   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.286142485s)
	I0318 22:04:53.343193   65170 system_svc.go:56] duration metric: took 369.874916ms WaitForService to wait for kubelet
	I0318 22:04:53.343215   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343229   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343216   65170 kubeadm.go:576] duration metric: took 2.89507195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:53.343238   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:53.343265   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.478242665s)
	I0318 22:04:53.343301   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343311   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343510   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.343555   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.343564   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.343572   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343580   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345065   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345078   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345082   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345065   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345094   65170 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-660775"
	I0318 22:04:53.345094   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345117   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345127   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.345136   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345401   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345400   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345419   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.347668   65170 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0318 22:04:53.348839   65170 addons.go:505] duration metric: took 2.900603006s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0318 22:04:53.363245   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:53.363274   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:53.363307   65170 node_conditions.go:105] duration metric: took 20.053581ms to run NodePressure ...
	I0318 22:04:53.363325   65170 start.go:240] waiting for startup goroutines ...
	I0318 22:04:53.363339   65170 start.go:245] waiting for cluster config update ...
	I0318 22:04:53.363353   65170 start.go:254] writing updated cluster config ...
	I0318 22:04:53.363674   65170 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:53.429018   65170 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:04:53.430584   65170 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-660775" cluster and "default" namespace by default
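The "waiting for k8s-apps to be running" phase logged above (system_pods.go) is a straightforward poll-and-retry over the kube-system pod list, backing off between attempts until coredns, kube-proxy and the control-plane pods all report Running. A minimal client-go sketch of that pattern follows; it is an illustration only, not minikube's actual code, and the kubeconfig path, timeout and component list are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; minikube writes its own copy onto the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Component name prefixes required to be Running, mirroring the pods listed in the log.
	required := []string{"coredns", "kube-proxy", "etcd", "kube-apiserver",
		"kube-controller-manager", "kube-scheduler"}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		missing := map[string]bool{}
		for _, name := range required {
			missing[name] = true
		}
		for _, p := range pods.Items {
			for _, name := range required {
				if strings.HasPrefix(p.Name, name) && p.Status.Phase == corev1.PodRunning {
					missing[name] = false
				}
			}
		}
		var still []string
		for name, m := range missing {
			if m {
				still = append(still, name)
			}
		}
		if len(still) == 0 {
			fmt.Println("all kube-system components running")
			return
		}
		// The real implementation retries with an increasing delay, as the log shows.
		fmt.Printf("will retry: missing components: %s\n", strings.Join(still, ", "))
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-system pods")
}
```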
	I0318 22:04:59.424318   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:59.425052   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:59.425084   65622 kubeadm.go:309] 
	I0318 22:04:59.425146   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:04:59.425207   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:04:59.425223   65622 kubeadm.go:309] 
	I0318 22:04:59.425262   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:04:59.425298   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:04:59.425454   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:04:59.425481   65622 kubeadm.go:309] 
	I0318 22:04:59.425647   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:04:59.425704   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:04:59.425752   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:04:59.425762   65622 kubeadm.go:309] 
	I0318 22:04:59.425917   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:04:59.426033   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:04:59.426045   65622 kubeadm.go:309] 
	I0318 22:04:59.426212   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:04:59.426346   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:04:59.426454   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:04:59.426547   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:04:59.426558   65622 kubeadm.go:309] 
	I0318 22:04:59.427148   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:59.427271   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:04:59.427372   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 22:04:59.427528   65622 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
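The repeated [kubelet-check] messages above correspond to kubeadm polling the kubelet's local healthz endpoint on port 10248; "connection refused" simply means nothing is listening there because the kubelet never came up. A tiny sketch of that probe, illustrating the check the log describes rather than kubeadm's implementation:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the log's `curl -sSL http://localhost:10248/healthz` hits.
	const url = "http://localhost:10248/healthz"
	for i := 0; i < 5; i++ {
		resp, err := http.Get(url)
		if err != nil {
			// e.g. "dial tcp 127.0.0.1:10248: connect: connection refused" while the kubelet is down
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("kubelet healthz status:", resp.Status)
		return
	}
	fmt.Println("giving up: kubelet never became healthy")
}
```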
	
	I0318 22:04:59.427572   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:05:00.055064   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:05:00.070514   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:05:00.083916   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:05:00.083938   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:05:00.083984   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:05:00.095316   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:05:00.095362   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:05:00.106457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:05:00.117255   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:05:00.117309   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:05:00.128432   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.138314   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:05:00.138371   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.148443   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:05:00.158539   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:05:00.158585   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
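Before retrying kubeadm init, the run above checks each residual /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not reference it (here the files are simply absent, so grep exits non-zero and the rm is a no-op). A rough sketch of that stale-config cleanup; this is illustrative only, with the endpoint constant and sudo invocations mirroring the log rather than minikube's exact code:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Endpoint minikube expects to find inside each kubeconfig, per the log above.
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + name
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := exec.Command("sudo", "grep", "-q", endpoint, path).Run(); err != nil {
			fmt.Printf("%s is stale or missing, removing so kubeadm regenerates it\n", path)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
```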
	I0318 22:05:00.169165   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:05:00.245400   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:05:00.245473   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:05:00.417644   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:05:00.417785   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:05:00.417883   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:05:00.634147   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:05:00.635738   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:05:00.635843   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:05:00.635930   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:05:00.636028   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:05:00.636089   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:05:00.636314   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:05:00.636537   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:05:00.636954   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:05:00.637502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:05:00.637924   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:05:00.638340   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:05:00.638425   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:05:00.638514   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:05:00.913839   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:05:00.990231   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:05:01.230957   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:05:01.548589   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:05:01.567890   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:05:01.569831   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:05:01.569913   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:05:01.734815   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:05:01.736685   65622 out.go:204]   - Booting up control plane ...
	I0318 22:05:01.736810   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:05:01.749926   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:05:01.751335   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:05:01.753793   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:05:01.754600   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:05:41.756944   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:05:41.757321   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:41.757565   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:46.758228   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:46.758483   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:56.759061   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:56.759280   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:16.760134   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:16.760369   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761317   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:56.761611   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761630   65622 kubeadm.go:309] 
	I0318 22:06:56.761682   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:06:56.761725   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:06:56.761732   65622 kubeadm.go:309] 
	I0318 22:06:56.761782   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:06:56.761829   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:06:56.761971   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:06:56.761988   65622 kubeadm.go:309] 
	I0318 22:06:56.762111   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:06:56.762159   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:06:56.762207   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:06:56.762221   65622 kubeadm.go:309] 
	I0318 22:06:56.762382   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:06:56.762502   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:06:56.762512   65622 kubeadm.go:309] 
	I0318 22:06:56.762630   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:06:56.762758   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:06:56.762856   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:06:56.762985   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:06:56.763011   65622 kubeadm.go:309] 
	I0318 22:06:56.763456   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:06:56.763590   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:06:56.763681   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 22:06:56.763764   65622 kubeadm.go:393] duration metric: took 7m58.719030677s to StartCluster
	I0318 22:06:56.763817   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:06:56.763885   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:06:56.813440   65622 cri.go:89] found id: ""
	I0318 22:06:56.813469   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.813480   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:06:56.813487   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:06:56.813553   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:06:56.852826   65622 cri.go:89] found id: ""
	I0318 22:06:56.852854   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.852865   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:06:56.852872   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:06:56.852949   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:06:56.894024   65622 cri.go:89] found id: ""
	I0318 22:06:56.894049   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.894057   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:06:56.894062   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:06:56.894123   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:06:56.932924   65622 cri.go:89] found id: ""
	I0318 22:06:56.932955   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.932967   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:06:56.932975   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:06:56.933033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:06:56.973307   65622 cri.go:89] found id: ""
	I0318 22:06:56.973336   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.973344   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:06:56.973350   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:06:56.973405   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:06:57.009107   65622 cri.go:89] found id: ""
	I0318 22:06:57.009134   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.009142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:06:57.009151   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:06:57.009213   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:06:57.046883   65622 cri.go:89] found id: ""
	I0318 22:06:57.046912   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.046922   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:06:57.046930   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:06:57.046991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:06:57.087670   65622 cri.go:89] found id: ""
	I0318 22:06:57.087698   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.087709   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:06:57.087722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:06:57.087736   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:06:57.143284   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:06:57.143320   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:06:57.159775   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:06:57.159803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:06:57.248520   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:06:57.248548   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:06:57.248563   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:06:57.368197   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:06:57.368230   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 22:06:57.413080   65622 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 22:06:57.413134   65622 out.go:239] * 
	W0318 22:06:57.413205   65622 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.413237   65622 out.go:239] * 
	W0318 22:06:57.414373   65622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 22:06:57.417746   65622 out.go:177] 
	W0318 22:06:57.418940   65622 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.419004   65622 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 22:06:57.419028   65622 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 22:06:57.420531   65622 out.go:177] 
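The K8S_KUBELET_NOT_RUNNING exit above points at the usual suspect for this symptom on cri-o: a cgroup-driver mismatch between the kubelet and the container runtime, which is what the suggested --extra-config=kubelet.cgroup-driver=systemd flag and issue 4172 are about. One quick way to compare the two settings on the node is a small check like the following sketch; the file paths and config keys are assumptions that can vary by distro and version:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readSetting returns the first line of path that contains key, or a placeholder if absent.
func readSetting(path, key string) string {
	data, err := os.ReadFile(path)
	if err != nil {
		return fmt.Sprintf("(%s unreadable: %v)", path, err)
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.Contains(line, key) {
			return strings.TrimSpace(line)
		}
	}
	return "(not set)"
}

func main() {
	// KubeletConfiguration written by kubeadm, per the log above.
	fmt.Println("kubelet :", readSetting("/var/lib/kubelet/config.yaml", "cgroupDriver"))
	// CRI-O runtime configuration; cgroup_manager should match the kubelet's driver.
	fmt.Println("cri-o   :", readSetting("/etc/crio/crio.conf", "cgroup_manager"))
}
```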
	
	
	==> CRI-O <==
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.600942909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800035600921741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf9b5fd3-78f6-4dc5-baf3-c682e56bfef0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.601561895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ac92821-90f8-415e-b4bf-880d1441f18e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.601642397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ac92821-90f8-415e-b4bf-880d1441f18e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.601830561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065,PodSandboxId:3f55da2b5c15761e726f21b507676b165fbe8a2f989e8bafcf82e204aa0b1816,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799493824941169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e9b588-fe14-44a7-9dfb-fb40ce72133f,},Annotations:map[string]string{io.kubernetes.container.hash: d558197d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3,PodSandboxId:6d4ae8907100951ac704256f306d62cb27c9b0957cf0d72ce4a8281fe89502b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799491536690395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2dsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f8591de-c0b4-4e0b-9e4f-623b58a59d08,},Annotations:map[string]string{io.kubernetes.container.hash: 33837b02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234,PodSandboxId:78f8818eee8078cfe063d8fe371fb720fd2775b6cfb7eab1f1c269b9a551250b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491912021059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vmj4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4916e690-e21f-4eae-aa11-74ad6c0b7f49,},Annotations:map[string]string{io.kubernetes.container.hash: ee7bf581,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d,PodSandboxId:629ba784158f9bb36e35109c1aef502f048dc0e02249c046151d43cf55eae5d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491804871223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-55f9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce919323-edf8-4caf-8952-
2ec4ac6593cd,},Annotations:map[string]string{io.kubernetes.container.hash: 88fcea70,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f,PodSandboxId:7cdb005a3fdea312c627a648d05cb88d6ad569e83492c818c137dc291d4c4d43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171079947232882278
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b6c0b6afd72a266c450fb622ac71f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59,PodSandboxId:c3a85fc998173bebe1cbcbbf16aae1dc581fc58975bdaae9d3c44168bd656695,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799472341440611,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e45b2b94387c62dd81fdf4957bbadb1,},Annotations:map[string]string{io.kubernetes.container.hash: 73776432,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356,PodSandboxId:9b5c70c76fc3c1271d0324f72c2b7f69945a51243c99e2a9ba0a95986910c6fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799472255789214,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 918a47c6af70c24caefa867aa7cc8e18,},Annotations:map[string]string{io.kubernetes.container.hash: 220cd580,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e,PodSandboxId:a614bb3c9d940bd19b550b4d09066b0f45773b4aff3b6e0d3b0ad7887e1ff60a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799472155355672,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e419c396595b17710729817eddcd7c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ac92821-90f8-415e-b4bf-880d1441f18e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.645945943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e33a971-eb59-4c61-8997-ef3af9e6f76f name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.646049997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e33a971-eb59-4c61-8997-ef3af9e6f76f name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.647683104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f746a4a-b085-46d9-b3d4-75fb8828c450 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.648052993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800035648033108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f746a4a-b085-46d9-b3d4-75fb8828c450 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.649006347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7b97c1b-07e5-49eb-8ce6-8477911e9e8a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.649088701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7b97c1b-07e5-49eb-8ce6-8477911e9e8a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.649350785Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065,PodSandboxId:3f55da2b5c15761e726f21b507676b165fbe8a2f989e8bafcf82e204aa0b1816,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799493824941169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e9b588-fe14-44a7-9dfb-fb40ce72133f,},Annotations:map[string]string{io.kubernetes.container.hash: d558197d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3,PodSandboxId:6d4ae8907100951ac704256f306d62cb27c9b0957cf0d72ce4a8281fe89502b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799491536690395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2dsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f8591de-c0b4-4e0b-9e4f-623b58a59d08,},Annotations:map[string]string{io.kubernetes.container.hash: 33837b02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234,PodSandboxId:78f8818eee8078cfe063d8fe371fb720fd2775b6cfb7eab1f1c269b9a551250b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491912021059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vmj4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4916e690-e21f-4eae-aa11-74ad6c0b7f49,},Annotations:map[string]string{io.kubernetes.container.hash: ee7bf581,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d,PodSandboxId:629ba784158f9bb36e35109c1aef502f048dc0e02249c046151d43cf55eae5d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491804871223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-55f9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce919323-edf8-4caf-8952-
2ec4ac6593cd,},Annotations:map[string]string{io.kubernetes.container.hash: 88fcea70,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f,PodSandboxId:7cdb005a3fdea312c627a648d05cb88d6ad569e83492c818c137dc291d4c4d43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171079947232882278
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b6c0b6afd72a266c450fb622ac71f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59,PodSandboxId:c3a85fc998173bebe1cbcbbf16aae1dc581fc58975bdaae9d3c44168bd656695,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799472341440611,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e45b2b94387c62dd81fdf4957bbadb1,},Annotations:map[string]string{io.kubernetes.container.hash: 73776432,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356,PodSandboxId:9b5c70c76fc3c1271d0324f72c2b7f69945a51243c99e2a9ba0a95986910c6fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799472255789214,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 918a47c6af70c24caefa867aa7cc8e18,},Annotations:map[string]string{io.kubernetes.container.hash: 220cd580,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e,PodSandboxId:a614bb3c9d940bd19b550b4d09066b0f45773b4aff3b6e0d3b0ad7887e1ff60a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799472155355672,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e419c396595b17710729817eddcd7c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7b97c1b-07e5-49eb-8ce6-8477911e9e8a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.690900704Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdb48114-d21f-4943-8c2a-8a0190aa941e name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.690974086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdb48114-d21f-4943-8c2a-8a0190aa941e name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.692008390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2211262-911e-400f-8089-d353c9f1600a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.692591686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800035692567433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2211262-911e-400f-8089-d353c9f1600a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.693517747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3fbb5fb-ecdf-4c7d-ac2c-9ff983e9b402 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.693573423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3fbb5fb-ecdf-4c7d-ac2c-9ff983e9b402 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.693758557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065,PodSandboxId:3f55da2b5c15761e726f21b507676b165fbe8a2f989e8bafcf82e204aa0b1816,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799493824941169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e9b588-fe14-44a7-9dfb-fb40ce72133f,},Annotations:map[string]string{io.kubernetes.container.hash: d558197d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3,PodSandboxId:6d4ae8907100951ac704256f306d62cb27c9b0957cf0d72ce4a8281fe89502b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799491536690395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2dsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f8591de-c0b4-4e0b-9e4f-623b58a59d08,},Annotations:map[string]string{io.kubernetes.container.hash: 33837b02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234,PodSandboxId:78f8818eee8078cfe063d8fe371fb720fd2775b6cfb7eab1f1c269b9a551250b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491912021059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vmj4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4916e690-e21f-4eae-aa11-74ad6c0b7f49,},Annotations:map[string]string{io.kubernetes.container.hash: ee7bf581,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d,PodSandboxId:629ba784158f9bb36e35109c1aef502f048dc0e02249c046151d43cf55eae5d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491804871223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-55f9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce919323-edf8-4caf-8952-
2ec4ac6593cd,},Annotations:map[string]string{io.kubernetes.container.hash: 88fcea70,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f,PodSandboxId:7cdb005a3fdea312c627a648d05cb88d6ad569e83492c818c137dc291d4c4d43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171079947232882278
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b6c0b6afd72a266c450fb622ac71f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59,PodSandboxId:c3a85fc998173bebe1cbcbbf16aae1dc581fc58975bdaae9d3c44168bd656695,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799472341440611,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e45b2b94387c62dd81fdf4957bbadb1,},Annotations:map[string]string{io.kubernetes.container.hash: 73776432,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356,PodSandboxId:9b5c70c76fc3c1271d0324f72c2b7f69945a51243c99e2a9ba0a95986910c6fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799472255789214,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 918a47c6af70c24caefa867aa7cc8e18,},Annotations:map[string]string{io.kubernetes.container.hash: 220cd580,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e,PodSandboxId:a614bb3c9d940bd19b550b4d09066b0f45773b4aff3b6e0d3b0ad7887e1ff60a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799472155355672,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e419c396595b17710729817eddcd7c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3fbb5fb-ecdf-4c7d-ac2c-9ff983e9b402 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.729461787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9933279e-fe53-49ef-a908-7461f0d673d6 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.729534226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9933279e-fe53-49ef-a908-7461f0d673d6 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.730534341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5a7a1db-8394-4244-abc4-21044fc20daf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.730979945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800035730955627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5a7a1db-8394-4244-abc4-21044fc20daf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.731682433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4662ac49-1b94-4092-a290-a53d4e759757 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.731732565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4662ac49-1b94-4092-a290-a53d4e759757 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:13:55 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:13:55.731906982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065,PodSandboxId:3f55da2b5c15761e726f21b507676b165fbe8a2f989e8bafcf82e204aa0b1816,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799493824941169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e9b588-fe14-44a7-9dfb-fb40ce72133f,},Annotations:map[string]string{io.kubernetes.container.hash: d558197d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3,PodSandboxId:6d4ae8907100951ac704256f306d62cb27c9b0957cf0d72ce4a8281fe89502b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799491536690395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2dsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f8591de-c0b4-4e0b-9e4f-623b58a59d08,},Annotations:map[string]string{io.kubernetes.container.hash: 33837b02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234,PodSandboxId:78f8818eee8078cfe063d8fe371fb720fd2775b6cfb7eab1f1c269b9a551250b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491912021059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vmj4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4916e690-e21f-4eae-aa11-74ad6c0b7f49,},Annotations:map[string]string{io.kubernetes.container.hash: ee7bf581,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d,PodSandboxId:629ba784158f9bb36e35109c1aef502f048dc0e02249c046151d43cf55eae5d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491804871223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-55f9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce919323-edf8-4caf-8952-
2ec4ac6593cd,},Annotations:map[string]string{io.kubernetes.container.hash: 88fcea70,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f,PodSandboxId:7cdb005a3fdea312c627a648d05cb88d6ad569e83492c818c137dc291d4c4d43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171079947232882278
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b6c0b6afd72a266c450fb622ac71f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59,PodSandboxId:c3a85fc998173bebe1cbcbbf16aae1dc581fc58975bdaae9d3c44168bd656695,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799472341440611,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e45b2b94387c62dd81fdf4957bbadb1,},Annotations:map[string]string{io.kubernetes.container.hash: 73776432,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356,PodSandboxId:9b5c70c76fc3c1271d0324f72c2b7f69945a51243c99e2a9ba0a95986910c6fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799472255789214,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 918a47c6af70c24caefa867aa7cc8e18,},Annotations:map[string]string{io.kubernetes.container.hash: 220cd580,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e,PodSandboxId:a614bb3c9d940bd19b550b4d09066b0f45773b4aff3b6e0d3b0ad7887e1ff60a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799472155355672,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e419c396595b17710729817eddcd7c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4662ac49-1b94-4092-a290-a53d4e759757 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a2e24a3274d6b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3f55da2b5c157       storage-provisioner
	38adbbaa34644       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   78f8818eee807       coredns-5dd5756b68-vmj4l
	f2c789f4cbe4d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   629ba784158f9       coredns-5dd5756b68-55f9q
	a0d2f16a4b499       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   6d4ae89071009       kube-proxy-z2dsq
	e74044536d4b3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   c3a85fc998173       etcd-default-k8s-diff-port-660775
	e86c29661f633       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   7cdb005a3fdea       kube-scheduler-default-k8s-diff-port-660775
	3e36060a89811       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   9b5c70c76fc3c       kube-apiserver-default-k8s-diff-port-660775
	19ea785e0f2a7       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   a614bb3c9d940       kube-controller-manager-default-k8s-diff-port-660775
	
	
	==> coredns [38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-660775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-660775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=default-k8s-diff-port-660775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T22_04_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 22:04:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-660775
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 22:13:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 22:10:04 +0000   Mon, 18 Mar 2024 22:04:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 22:10:04 +0000   Mon, 18 Mar 2024 22:04:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 22:10:04 +0000   Mon, 18 Mar 2024 22:04:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 22:10:04 +0000   Mon, 18 Mar 2024 22:04:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.150
	  Hostname:    default-k8s-diff-port-660775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff954e8ff45d4b5f9e4b6dac58acdc14
	  System UUID:                ff954e8f-f45d-4b5f-9e4b-6dac58acdc14
	  Boot ID:                    09e2df0a-9467-437a-ba40-c1638b1ff79b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-55f9q                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-5dd5756b68-vmj4l                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-default-k8s-diff-port-660775                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-660775             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-660775    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-z2dsq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-default-k8s-diff-port-660775             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-57f55c9bc5-x2jjj                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node default-k8s-diff-port-660775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node default-k8s-diff-port-660775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node default-k8s-diff-port-660775 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m18s  kubelet          Node default-k8s-diff-port-660775 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m8s   kubelet          Node default-k8s-diff-port-660775 status is now: NodeReady
	  Normal  RegisteredNode           9m6s   node-controller  Node default-k8s-diff-port-660775 event: Registered Node default-k8s-diff-port-660775 in Controller
	
	
	==> dmesg <==
	[  +0.049147] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.862607] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.824242] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.765009] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.764150] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.057791] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066275] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.217827] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.137553] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.344970] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +5.875738] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +0.070392] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.399398] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +5.624149] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.027072] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 22:00] kauditd_printk_skb: 2 callbacks suppressed
	[Mar18 22:04] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.889085] systemd-fstab-generator[3426]: Ignoring "noauto" option for root device
	[  +4.854763] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.445049] systemd-fstab-generator[3751]: Ignoring "noauto" option for root device
	[ +12.501156] systemd-fstab-generator[3955]: Ignoring "noauto" option for root device
	[  +0.105442] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 22:05] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59] <==
	{"level":"info","ts":"2024-03-18T22:04:32.71801Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T22:04:32.726733Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d4c80b78635351ab","initial-advertise-peer-urls":["https://192.168.50.150:2380"],"listen-peer-urls":["https://192.168.50.150:2380"],"advertise-client-urls":["https://192.168.50.150:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.150:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T22:04:32.726805Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T22:04:32.724657Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.150:2380"}
	{"level":"info","ts":"2024-03-18T22:04:32.72688Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.150:2380"}
	{"level":"info","ts":"2024-03-18T22:04:32.726559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab switched to configuration voters=(15332517543073239467)"}
	{"level":"info","ts":"2024-03-18T22:04:32.727067Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d8323e35ed60dfee","local-member-id":"d4c80b78635351ab","added-peer-id":"d4c80b78635351ab","added-peer-peer-urls":["https://192.168.50.150:2380"]}
	{"level":"info","ts":"2024-03-18T22:04:33.071315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T22:04:33.071374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T22:04:33.071485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab received MsgPreVoteResp from d4c80b78635351ab at term 1"}
	{"level":"info","ts":"2024-03-18T22:04:33.0715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T22:04:33.071509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab received MsgVoteResp from d4c80b78635351ab at term 2"}
	{"level":"info","ts":"2024-03-18T22:04:33.07152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab became leader at term 2"}
	{"level":"info","ts":"2024-03-18T22:04:33.071528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c80b78635351ab elected leader d4c80b78635351ab at term 2"}
	{"level":"info","ts":"2024-03-18T22:04:33.077449Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d4c80b78635351ab","local-member-attributes":"{Name:default-k8s-diff-port-660775 ClientURLs:[https://192.168.50.150:2379]}","request-path":"/0/members/d4c80b78635351ab/attributes","cluster-id":"d8323e35ed60dfee","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T22:04:33.077594Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T22:04:33.080273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T22:04:33.080767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.150:2379"}
	{"level":"info","ts":"2024-03-18T22:04:33.080919Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:04:33.083858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T22:04:33.084467Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d8323e35ed60dfee","local-member-id":"d4c80b78635351ab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:04:33.090569Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:04:33.090653Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:04:33.093271Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T22:04:33.095239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:13:56 up 14 min,  0 users,  load average: 0.23, 0.19, 0.15
	Linux default-k8s-diff-port-660775 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356] <==
	W0318 22:09:36.201278       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:09:36.201391       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:09:36.201418       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:09:36.201347       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:09:36.201539       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:09:36.202537       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:10:35.083897       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:10:36.202307       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:10:36.202362       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:10:36.202370       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:10:36.203599       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:10:36.203701       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:10:36.203712       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:11:35.084651       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 22:12:35.084256       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:12:36.202724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:12:36.202834       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:12:36.202860       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:12:36.204125       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:12:36.204339       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:12:36.204369       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:13:35.084481       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e] <==
	I0318 22:08:27.517370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="241.245µs"
	E0318 22:08:50.303523       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:08:50.770330       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:09:20.309040       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:09:20.779427       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:09:50.317455       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:09:50.787871       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:10:20.327253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:10:20.797615       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:10:50.334453       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:10:50.806625       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 22:10:54.519022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="337.234µs"
	I0318 22:11:08.517652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="106.828µs"
	E0318 22:11:20.340407       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:11:20.815322       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:11:50.348160       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:11:50.824375       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:12:20.354104       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:12:20.833439       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:12:50.361099       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:12:50.842884       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:13:20.366361       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:13:20.852100       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:13:50.373875       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:13:50.860597       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3] <==
	I0318 22:04:52.904079       1 server_others.go:69] "Using iptables proxy"
	I0318 22:04:52.925320       1 node.go:141] Successfully retrieved node IP: 192.168.50.150
	I0318 22:04:53.057590       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 22:04:53.057616       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 22:04:53.065116       1 server_others.go:152] "Using iptables Proxier"
	I0318 22:04:53.067375       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 22:04:53.067639       1 server.go:846] "Version info" version="v1.28.4"
	I0318 22:04:53.067649       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 22:04:53.075434       1 config.go:188] "Starting service config controller"
	I0318 22:04:53.081427       1 config.go:97] "Starting endpoint slice config controller"
	I0318 22:04:53.081439       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 22:04:53.081565       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 22:04:53.081595       1 config.go:315] "Starting node config controller"
	I0318 22:04:53.081731       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 22:04:53.184339       1 shared_informer.go:318] Caches are synced for node config
	I0318 22:04:53.185945       1 shared_informer.go:318] Caches are synced for service config
	I0318 22:04:53.192134       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f] <==
	W0318 22:04:35.238787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 22:04:35.238823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 22:04:35.238872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 22:04:35.238881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 22:04:35.238934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 22:04:35.238990       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 22:04:35.239029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 22:04:35.239096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 22:04:35.242391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 22:04:35.242440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 22:04:35.242493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 22:04:35.242502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 22:04:36.068710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 22:04:36.068772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 22:04:36.118101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 22:04:36.118252       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 22:04:36.123586       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 22:04:36.124027       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 22:04:36.251566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 22:04:36.251621       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 22:04:36.336001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 22:04:36.336054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 22:04:36.454745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 22:04:36.454798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 22:04:38.615769       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 22:11:38 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:11:38.618786    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:11:38 default-k8s-diff-port-660775 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:11:38 default-k8s-diff-port-660775 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:11:38 default-k8s-diff-port-660775 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:11:38 default-k8s-diff-port-660775 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:11:49 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:11:49.499653    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:12:03 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:12:03.499613    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:12:14 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:12:14.499023    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:12:28 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:12:28.499076    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:12:38 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:12:38.620303    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:12:38 default-k8s-diff-port-660775 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:12:38 default-k8s-diff-port-660775 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:12:38 default-k8s-diff-port-660775 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:12:38 default-k8s-diff-port-660775 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:12:43 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:12:43.499810    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:12:56 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:12:56.499927    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:13:07 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:13:07.500356    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:13:21 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:13:21.499117    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:13:34 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:13:34.501301    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:13:38 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:13:38.619339    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:13:38 default-k8s-diff-port-660775 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:13:38 default-k8s-diff-port-660775 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:13:38 default-k8s-diff-port-660775 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:13:38 default-k8s-diff-port-660775 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:13:48 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:13:48.501076    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	
	
	==> storage-provisioner [a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065] <==
	I0318 22:04:53.915540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 22:04:53.929303       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 22:04:53.929518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 22:04:53.940732       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 22:04:53.941635       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ef6aed3-5e93-45a8-b487-ab6fa74c09b5", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-660775_bee4e015-a584-400a-b2fb-771f58fdd9d4 became leader
	I0318 22:04:53.941742       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-660775_bee4e015-a584-400a-b2fb-771f58fdd9d4!
	I0318 22:04:54.042903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-660775_bee4e015-a584-400a-b2fb-771f58fdd9d4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-660775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-x2jjj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-660775 describe pod metrics-server-57f55c9bc5-x2jjj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-660775 describe pod metrics-server-57f55c9bc5-x2jjj: exit status 1 (61.579721ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-x2jjj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-660775 describe pod metrics-server-57f55c9bc5-x2jjj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.26s)
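For manual triage of this timeout, a minimal sketch of the equivalent kubectl checks is given below; it is illustrative and not part of the captured test output. The context name, the --field-selector filter, and the metrics-server Deployment come from the post-mortem and logs above; the k8s-app=kubernetes-dashboard selector is assumed to be the same one the harness polls for (it is shown explicitly for the old-k8s-version run that follows).

	# list pods the harness would flag as non-running (same filter as helpers_test.go:261 above)
	kubectl --context default-k8s-diff-port-660775 get pods -A --field-selector=status.phase!=Running
	# check the user app the test waits for (selector assumed from the old-k8s-version run below)
	kubectl --context default-k8s-diff-port-660775 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# inspect metrics-server via its Deployment rather than the pod name, since the pod listed above was already gone when described
	kubectl --context default-k8s-diff-port-660775 -n kube-system describe deploy/metrics-server

The NotFound from the pod-level describe above suggests the metrics-server pod was recreated between listing and describing; the ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 in the kubelet log is consistent with that pod never reaching Running.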

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:07:01.613428   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:07:13.470894   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:08:08.306313   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:08:14.653527   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:08:17.205704   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:08:21.832895   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 14 more times]
E0318 22:08:36.515054   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 31 more times]
E0318 22:09:07.940563   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 6 more times]
E0318 22:09:15.040977   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 15 more times]
E0318 22:09:31.349970   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 12 more times]
E0318 22:09:44.879194   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 29 more times]
E0318 22:10:14.158322   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 8 more times]
E0318 22:10:23.236836   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 14 more times]
E0318 22:10:38.087201   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 22:10:38.567529   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[identical warning repeated 33 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:11:51.608532   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: [previous warning repeated 21 more times]
E0318 22:12:13.470499   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: [previous warning repeated 67 more times]
E0318 22:13:21.833028   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: [previous warning repeated 4 more times]
E0318 22:13:26.283328   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: [previous warning repeated 23 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:14:07.940318   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[last message repeated 6 times]
E0318 22:14:15.041790   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[last message repeated 58 times]
E0318 22:15:14.158006   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[last message repeated 8 times]
E0318 22:15:23.236429   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[last message repeated 14 times]
E0318 22:15:38.567530   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
[last message repeated 21 times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
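For reference, the poll that keeps failing above is a label-selector pod list against the kubernetes-dashboard namespace. It can be reproduced by hand with the sketch below, assuming the kubectl context carries the profile name old-k8s-version-648232 (the convention minikube uses when it writes kubeconfig):

	# same query the helper issues, via kubectl instead of the raw REST path
	kubectl --context old-k8s-version-648232 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard

While the apiserver on 192.168.61.111:8443 refuses connections, this command fails with the same "connection refused" error as the helper.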
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 2 (243.886536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-648232" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
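The wait that times out here is for a Ready dashboard pod after the stop/start cycle. It can be approximated manually with the sketch below, assuming the dashboard addon is enabled on this profile and the apiserver is reachable again:

	# wait, as the test does, for the dashboard pod to become Ready within 9 minutes
	kubectl --context old-k8s-version-648232 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s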
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 2 (237.258594ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-648232 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-648232 logs -n 25: (1.602357565s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-389288 sudo cat                              | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo find                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo crio                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-389288                                       | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-369155 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | disable-driver-mounts-369155                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:50 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-660775  | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC |                     |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-141758            | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 21:54:36.607114   65699 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:54:36.607254   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607266   65699 out.go:304] Setting ErrFile to fd 2...
	I0318 21:54:36.607272   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607706   65699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:54:36.608596   65699 out.go:298] Setting JSON to false
	I0318 21:54:36.609468   65699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5821,"bootTime":1710793056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:54:36.609529   65699 start.go:139] virtualization: kvm guest
	I0318 21:54:36.611401   65699 out.go:177] * [no-preload-963041] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:54:36.612703   65699 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:54:36.612704   65699 notify.go:220] Checking for updates...
	I0318 21:54:36.613976   65699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:54:36.615157   65699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:54:36.616283   65699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:54:36.617431   65699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:54:36.618615   65699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:54:36.620094   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:54:36.620490   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.620537   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.634914   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0318 21:54:36.635251   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.635706   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.635728   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.636019   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.636173   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.636411   65699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:54:36.636719   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.636756   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.650608   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0318 21:54:36.650946   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.651358   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.651383   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.651694   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.651832   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.682407   65699 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:54:36.683826   65699 start.go:297] selected driver: kvm2
	I0318 21:54:36.683837   65699 start.go:901] validating driver "kvm2" against &{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.683941   65699 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:54:36.684624   65699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.684696   65699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:54:36.699415   65699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:54:36.699766   65699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:54:36.699827   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:54:36.699840   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:54:36.699883   65699 start.go:340] cluster config:
	{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.699984   65699 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.701584   65699 out.go:177] * Starting "no-preload-963041" primary control-plane node in "no-preload-963041" cluster
	I0318 21:54:36.702792   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:54:36.702911   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:54:36.703027   65699 cache.go:107] acquiring lock: {Name:mk20bcc8d34b80cc44c1e33bc5e0ec5cd82ba46e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703044   65699 cache.go:107] acquiring lock: {Name:mk299438a86024ea6c96280d8bbe30c1283fa996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703087   65699 cache.go:107] acquiring lock: {Name:mkf5facbc69c16807f75e75a80a4afa3f97a0ecc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703124   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0318 21:54:36.703127   65699 start.go:360] acquireMachinesLock for no-preload-963041: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:54:36.703141   65699 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 102.209µs
	I0318 21:54:36.703156   65699 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0318 21:54:36.703104   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 21:54:36.703174   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 21:54:36.703172   65699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 156.262µs
	I0318 21:54:36.703190   65699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703043   65699 cache.go:107] acquiring lock: {Name:mk4c82b4e60b551671fa99921294b8e1f551d382 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703189   65699 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 104.037µs
	I0318 21:54:36.703209   65699 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 21:54:36.703137   65699 cache.go:107] acquiring lock: {Name:mk847ac7ddb8863389782289e61001579ff6ec5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703204   65699 cache.go:107] acquiring lock: {Name:mk1bf8cc3e30a7cf88f25697f1021501ea6ee4ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703243   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 21:54:36.703254   65699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 163.57µs
	I0318 21:54:36.703233   65699 cache.go:107] acquiring lock: {Name:mkf9c9b33c4d1ca54e3364ad39dcd3b10bc50534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703265   65699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703224   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 21:54:36.703282   65699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 247.672µs
	I0318 21:54:36.703293   65699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 21:54:36.703293   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 21:54:36.703293   65699 cache.go:107] acquiring lock: {Name:mkd0bd00e6f69df37097a8ce792bcc8844efbc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703315   65699 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 156.33µs
	I0318 21:54:36.703329   65699 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 21:54:36.703363   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 21:54:36.703385   65699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 207.404µs
	I0318 21:54:36.703400   65699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703411   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 21:54:36.703419   65699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 164.5µs
	I0318 21:54:36.703435   65699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703447   65699 cache.go:87] Successfully saved all images to host disk.
	I0318 21:54:40.421098   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:43.493261   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:49.573105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:52.645158   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:58.725124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:01.797077   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:07.877116   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:10.949096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:17.029117   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:20.101131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:26.181141   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:29.253113   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:35.333097   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:38.405132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:44.485208   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:47.557123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:53.637185   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:56.709102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:02.789134   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:05.861146   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:11.941102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:15.013092   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:21.093132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:24.165129   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:30.245127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:33.317151   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:39.397126   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:42.469163   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:48.549145   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:51.621085   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:57.701118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:00.773108   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:06.853105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:09.925096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:16.005131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:19.077111   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:25.157130   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:28.229107   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:34.309152   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:37.381127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:43.461123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:46.533127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:52.613124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:55.685135   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:01.765118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:04.837197   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:07.840986   65211 start.go:364] duration metric: took 4m36.169318619s to acquireMachinesLock for "embed-certs-141758"
	I0318 21:58:07.841046   65211 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:07.841054   65211 fix.go:54] fixHost starting: 
	I0318 21:58:07.841507   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:07.841544   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:07.856544   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0318 21:58:07.856976   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:07.857424   65211 main.go:141] libmachine: Using API Version  1
	I0318 21:58:07.857452   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:07.857783   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:07.857971   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:07.858126   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 21:58:07.859909   65211 fix.go:112] recreateIfNeeded on embed-certs-141758: state=Stopped err=<nil>
	I0318 21:58:07.859947   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	W0318 21:58:07.860120   65211 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:07.862134   65211 out.go:177] * Restarting existing kvm2 VM for "embed-certs-141758" ...
	I0318 21:58:07.838706   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:07.838746   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839036   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:58:07.839060   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839263   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:58:07.840867   65170 machine.go:97] duration metric: took 4m37.426711052s to provisionDockerMachine
	I0318 21:58:07.840915   65170 fix.go:56] duration metric: took 4m37.446713188s for fixHost
	I0318 21:58:07.840923   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 4m37.446748943s
	W0318 21:58:07.840945   65170 start.go:713] error starting host: provision: host is not running
	W0318 21:58:07.841017   65170 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 21:58:07.841026   65170 start.go:728] Will try again in 5 seconds ...
	I0318 21:58:07.863352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Start
	I0318 21:58:07.863483   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring networks are active...
	I0318 21:58:07.864202   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network default is active
	I0318 21:58:07.864652   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network mk-embed-certs-141758 is active
	I0318 21:58:07.865077   65211 main.go:141] libmachine: (embed-certs-141758) Getting domain xml...
	I0318 21:58:07.865858   65211 main.go:141] libmachine: (embed-certs-141758) Creating domain...
	I0318 21:58:09.026367   65211 main.go:141] libmachine: (embed-certs-141758) Waiting to get IP...
	I0318 21:58:09.027144   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.027524   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.027580   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.027503   66223 retry.go:31] will retry after 260.499882ms: waiting for machine to come up
	I0318 21:58:09.289935   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.290490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.290522   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.290450   66223 retry.go:31] will retry after 328.000758ms: waiting for machine to come up
	I0318 21:58:09.619947   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.620337   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.620384   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.620305   66223 retry.go:31] will retry after 419.640035ms: waiting for machine to come up
	I0318 21:58:10.041775   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.042186   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.042213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.042134   66223 retry.go:31] will retry after 482.732439ms: waiting for machine to come up
	I0318 21:58:10.526892   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.527282   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.527307   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.527253   66223 retry.go:31] will retry after 718.696645ms: waiting for machine to come up
	I0318 21:58:11.247165   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.247545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.247571   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.247501   66223 retry.go:31] will retry after 603.951593ms: waiting for machine to come up
	I0318 21:58:12.842928   65170 start.go:360] acquireMachinesLock for default-k8s-diff-port-660775: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:58:11.853119   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.853408   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.853438   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.853362   66223 retry.go:31] will retry after 1.191963995s: waiting for machine to come up
	I0318 21:58:13.046915   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:13.047289   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:13.047319   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:13.047237   66223 retry.go:31] will retry after 1.314666633s: waiting for machine to come up
	I0318 21:58:14.363693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:14.364109   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:14.364135   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:14.364064   66223 retry.go:31] will retry after 1.341191632s: waiting for machine to come up
	I0318 21:58:15.707425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:15.707921   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:15.707951   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:15.707862   66223 retry.go:31] will retry after 1.887572842s: waiting for machine to come up
	I0318 21:58:17.596545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:17.596970   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:17.597002   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:17.596899   66223 retry.go:31] will retry after 2.820006704s: waiting for machine to come up
	I0318 21:58:20.420327   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:20.420693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:20.420714   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:20.420659   66223 retry.go:31] will retry after 3.099836206s: waiting for machine to come up
	I0318 21:58:23.522155   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:23.522490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:23.522517   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:23.522450   66223 retry.go:31] will retry after 4.512794132s: waiting for machine to come up
	I0318 21:58:29.414007   65622 start.go:364] duration metric: took 3m59.339882587s to acquireMachinesLock for "old-k8s-version-648232"
	I0318 21:58:29.414072   65622 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:29.414080   65622 fix.go:54] fixHost starting: 
	I0318 21:58:29.414429   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:29.414462   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:29.431057   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0318 21:58:29.431482   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:29.432042   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:58:29.432067   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:29.432376   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:29.432568   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:29.432725   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetState
	I0318 21:58:29.433956   65622 fix.go:112] recreateIfNeeded on old-k8s-version-648232: state=Stopped err=<nil>
	I0318 21:58:29.433996   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	W0318 21:58:29.434155   65622 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:29.436328   65622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-648232" ...
	I0318 21:58:29.437884   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .Start
	I0318 21:58:29.438022   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring networks are active...
	I0318 21:58:29.438616   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network default is active
	I0318 21:58:29.438967   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network mk-old-k8s-version-648232 is active
	I0318 21:58:29.439362   65622 main.go:141] libmachine: (old-k8s-version-648232) Getting domain xml...
	I0318 21:58:29.440065   65622 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
	I0318 21:58:28.036425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.036898   65211 main.go:141] libmachine: (embed-certs-141758) Found IP for machine: 192.168.39.243
	I0318 21:58:28.036949   65211 main.go:141] libmachine: (embed-certs-141758) Reserving static IP address...
	I0318 21:58:28.036967   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has current primary IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.037428   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.037452   65211 main.go:141] libmachine: (embed-certs-141758) DBG | skip adding static IP to network mk-embed-certs-141758 - found existing host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"}
	I0318 21:58:28.037461   65211 main.go:141] libmachine: (embed-certs-141758) Reserved static IP address: 192.168.39.243
	I0318 21:58:28.037473   65211 main.go:141] libmachine: (embed-certs-141758) Waiting for SSH to be available...
	I0318 21:58:28.037485   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Getting to WaitForSSH function...
	I0318 21:58:28.039459   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039778   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.039810   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039928   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH client type: external
	I0318 21:58:28.039955   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa (-rw-------)
	I0318 21:58:28.039995   65211 main.go:141] libmachine: (embed-certs-141758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:28.040027   65211 main.go:141] libmachine: (embed-certs-141758) DBG | About to run SSH command:
	I0318 21:58:28.040044   65211 main.go:141] libmachine: (embed-certs-141758) DBG | exit 0
	I0318 21:58:28.169219   65211 main.go:141] libmachine: (embed-certs-141758) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:28.169554   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetConfigRaw
	I0318 21:58:28.170153   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.172372   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.172760   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.172787   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.173016   65211 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/config.json ...
	I0318 21:58:28.173186   65211 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:28.173203   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:28.173399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.175433   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175767   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.175802   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175920   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.176079   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176389   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.176553   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.176790   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.176805   65211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:28.285370   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:28.285407   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285629   65211 buildroot.go:166] provisioning hostname "embed-certs-141758"
	I0318 21:58:28.285651   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285856   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.288382   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288708   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.288739   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288863   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.289067   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289220   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289361   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.289515   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.289717   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.289735   65211 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-141758 && echo "embed-certs-141758" | sudo tee /etc/hostname
	I0318 21:58:28.420311   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-141758
	
	I0318 21:58:28.420351   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.422864   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.423245   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423431   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.423608   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423759   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423891   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.424044   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.424234   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.424256   65211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-141758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-141758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-141758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:28.549277   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:28.549307   65211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:28.549325   65211 buildroot.go:174] setting up certificates
	I0318 21:58:28.549334   65211 provision.go:84] configureAuth start
	I0318 21:58:28.549343   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.549572   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.551881   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552183   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.552205   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.554341   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554629   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.554656   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554752   65211 provision.go:143] copyHostCerts
	I0318 21:58:28.554812   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:28.554825   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:28.554912   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:28.555020   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:28.555032   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:28.555062   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:28.555145   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:28.555155   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:28.555192   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:28.555259   65211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.embed-certs-141758 san=[127.0.0.1 192.168.39.243 embed-certs-141758 localhost minikube]
	I0318 21:58:28.706111   65211 provision.go:177] copyRemoteCerts
	I0318 21:58:28.706158   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:28.706185   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.708537   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708795   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.708822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.709164   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.709335   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.709446   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:28.796199   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:28.827207   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:58:28.854273   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:28.880505   65211 provision.go:87] duration metric: took 331.161751ms to configureAuth
	I0318 21:58:28.880524   65211 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:28.880716   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:58:28.880801   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.883232   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.883583   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883753   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.883926   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884087   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884186   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.884339   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.884481   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.884496   65211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:29.164330   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:29.164357   65211 machine.go:97] duration metric: took 991.159236ms to provisionDockerMachine
	I0318 21:58:29.164370   65211 start.go:293] postStartSetup for "embed-certs-141758" (driver="kvm2")
	I0318 21:58:29.164381   65211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:29.164434   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.164734   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:29.164758   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.167400   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167696   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.167719   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167867   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.168065   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.168235   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.168352   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.256141   65211 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:29.261086   65211 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:29.261104   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:29.261157   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:29.261229   65211 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:29.261309   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:29.271174   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:29.297161   65211 start.go:296] duration metric: took 132.781067ms for postStartSetup
	I0318 21:58:29.297192   65211 fix.go:56] duration metric: took 21.456139061s for fixHost
	I0318 21:58:29.297208   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.299741   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300102   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.300127   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300289   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.300480   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300750   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.300864   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:29.301028   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:29.301039   65211 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:29.413842   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799109.363417589
	
	I0318 21:58:29.413869   65211 fix.go:216] guest clock: 1710799109.363417589
	I0318 21:58:29.413876   65211 fix.go:229] Guest: 2024-03-18 21:58:29.363417589 +0000 UTC Remote: 2024-03-18 21:58:29.297195181 +0000 UTC m=+297.765354372 (delta=66.222408ms)
	I0318 21:58:29.413892   65211 fix.go:200] guest clock delta is within tolerance: 66.222408ms
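	(The two timestamps above come from reading the guest clock over SSH and comparing it against the host-side time recorded by fix.go; the host is accepted when the absolute delta stays under a tolerance. A minimal Go sketch of that comparison follows, using the values from the log lines above; the 2s tolerance is an illustrative assumption, not taken from the log, and this is not minikube's actual fix.go code.)

	```go
	package main

	import (
		"fmt"
		"time"
	)

	// withinClockTolerance reports whether the guest clock is close enough to the
	// host clock. The tolerance value is an assumption for illustration; the log
	// above only shows that a delta of about 66ms was accepted.
	func withinClockTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Timestamps taken from the log lines above.
		guest := time.Date(2024, 3, 18, 21, 58, 29, 363417589, time.UTC)
		host := time.Date(2024, 3, 18, 21, 58, 29, 297195181, time.UTC)
		delta, ok := withinClockTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // delta is about 66.222408ms
	}
	```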
	I0318 21:58:29.413899   65211 start.go:83] releasing machines lock for "embed-certs-141758", held for 21.572869797s
	I0318 21:58:29.413932   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.414191   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:29.416929   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417293   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.417318   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417500   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418019   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418159   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418230   65211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:29.418275   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.418330   65211 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:29.418344   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.420728   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421022   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421053   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421076   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421228   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421413   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421464   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421493   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421593   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.421673   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421749   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.421828   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421960   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.422081   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.502548   65211 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:29.531994   65211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:29.681482   65211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:29.689671   65211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:29.689735   65211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:29.711660   65211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:29.711682   65211 start.go:494] detecting cgroup driver to use...
	I0318 21:58:29.711750   65211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:29.728159   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:29.742409   65211 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:29.742450   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:29.757587   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:29.772218   65211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:29.883164   65211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:30.046773   65211 docker.go:233] disabling docker service ...
	I0318 21:58:30.046845   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:30.065878   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:30.081551   65211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:30.223188   65211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:30.353535   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:30.370291   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:30.391728   65211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:58:30.391789   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.409204   65211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:30.409281   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.426464   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.439964   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.452097   65211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:30.464410   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.475990   65211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.495092   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.506831   65211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:30.517410   65211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:30.517463   65211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:30.532465   65211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
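	(The sequence above probes net.bridge.bridge-nf-call-iptables, and when the key is missing it loads br_netfilter before enabling IP forwarding. A minimal Go sketch of that probe-then-fallback pattern follows; it uses local os/exec with sudo purely for illustration, whereas minikube runs these commands over SSH via ssh_runner, so this is not the actual crio.go implementation and requires a Linux host with root.)

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the probe-then-fallback seen in the log:
	// if the sysctl key is missing, load the br_netfilter module that provides
	// it, then enable IPv4 forwarding. Illustrative only.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// Key not present yet: load the module and retry implicitly via the
			// later kubeadm preflight checks.
			if err := exec.Command("sudo", "modprobe", "br_netfilter"); err.Run() != nil {
				return fmt.Errorf("modprobe br_netfilter failed")
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("netfilter setup failed:", err)
		}
	}
	```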
	I0318 21:58:30.543958   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:30.679788   65211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:30.839388   65211 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:30.839466   65211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:30.844666   65211 start.go:562] Will wait 60s for crictl version
	I0318 21:58:30.844720   65211 ssh_runner.go:195] Run: which crictl
	I0318 21:58:30.848886   65211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:30.888598   65211 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:30.888686   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.921097   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.954037   65211 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:58:30.955378   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:30.958352   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.958792   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:30.958822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.959064   65211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:30.963556   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:30.977788   65211 kubeadm.go:877] updating cluster {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:30.977899   65211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:58:30.977949   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:31.018843   65211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:58:31.018926   65211 ssh_runner.go:195] Run: which lz4
	I0318 21:58:31.023589   65211 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:58:31.028416   65211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:31.028445   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:58:30.668558   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting to get IP...
	I0318 21:58:30.669483   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.669936   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.670023   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.669931   66350 retry.go:31] will retry after 222.544346ms: waiting for machine to come up
	I0318 21:58:30.894570   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.895113   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.895140   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.895068   66350 retry.go:31] will retry after 355.752794ms: waiting for machine to come up
	I0318 21:58:31.252797   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.253265   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.253293   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.253217   66350 retry.go:31] will retry after 473.104426ms: waiting for machine to come up
	I0318 21:58:31.727579   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.728129   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.728157   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.728079   66350 retry.go:31] will retry after 566.412205ms: waiting for machine to come up
	I0318 21:58:32.295552   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.296044   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.296072   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.296004   66350 retry.go:31] will retry after 573.484484ms: waiting for machine to come up
	I0318 21:58:32.870871   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.871287   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.871346   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.871277   66350 retry.go:31] will retry after 932.863596ms: waiting for machine to come up
	I0318 21:58:33.805377   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:33.805847   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:33.805895   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:33.805795   66350 retry.go:31] will retry after 1.069321569s: waiting for machine to come up
	I0318 21:58:34.877311   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:34.877827   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:34.877860   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:34.877773   66350 retry.go:31] will retry after 1.27837332s: waiting for machine to come up
	I0318 21:58:32.944637   65211 crio.go:462] duration metric: took 1.921083293s to copy over tarball
	I0318 21:58:32.944709   65211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:35.696230   65211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751490576s)
	I0318 21:58:35.696261   65211 crio.go:469] duration metric: took 2.751600779s to extract the tarball
	I0318 21:58:35.696271   65211 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:35.739467   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:35.794398   65211 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:58:35.794427   65211 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:58:35.794436   65211 kubeadm.go:928] updating node { 192.168.39.243 8443 v1.28.4 crio true true} ...
	I0318 21:58:35.794559   65211 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-141758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:35.794625   65211 ssh_runner.go:195] Run: crio config
	I0318 21:58:35.844849   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:35.844877   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:35.844888   65211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:35.844923   65211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-141758 NodeName:embed-certs-141758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:58:35.845069   65211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-141758"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:35.845124   65211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:58:35.856885   65211 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:35.856950   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:35.867990   65211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 21:58:35.887057   65211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:35.909244   65211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 21:58:35.931267   65211 ssh_runner.go:195] Run: grep 192.168.39.243	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:35.935793   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:35.950323   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:36.093377   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:36.112548   65211 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758 for IP: 192.168.39.243
	I0318 21:58:36.112575   65211 certs.go:194] generating shared ca certs ...
	I0318 21:58:36.112596   65211 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:36.112766   65211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:36.112813   65211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:36.112822   65211 certs.go:256] generating profile certs ...
	I0318 21:58:36.112943   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/client.key
	I0318 21:58:36.113043   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key.d575a4ae
	I0318 21:58:36.113097   65211 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key
	I0318 21:58:36.113263   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:36.113307   65211 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:36.113322   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:36.113359   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:36.113396   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:36.113429   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:36.113536   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:36.114412   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:36.147930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:36.177554   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:36.208374   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:36.243425   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 21:58:36.276720   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:58:36.317930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:36.345717   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:58:36.371655   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:36.396998   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:36.422750   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:36.448117   65211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:36.466558   65211 ssh_runner.go:195] Run: openssl version
	I0318 21:58:36.472888   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:36.484389   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489534   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489585   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.496045   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:36.507723   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:36.519030   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524214   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524267   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.531109   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:36.543912   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:36.556130   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561330   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561369   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.567883   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:36.158196   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:36.158633   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:36.158667   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:36.158581   66350 retry.go:31] will retry after 1.348066025s: waiting for machine to come up
	I0318 21:58:37.509248   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:37.509617   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:37.509637   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:37.509581   66350 retry.go:31] will retry after 2.080074922s: waiting for machine to come up
	I0318 21:58:39.591514   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:39.591973   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:39.592001   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:39.591934   66350 retry.go:31] will retry after 2.302421788s: waiting for machine to come up
	I0318 21:58:36.579819   65211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:36.824046   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:36.831273   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:36.838571   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:36.845621   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:36.852423   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:36.859433   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:58:36.866091   65211 kubeadm.go:391] StartCluster: {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:36.866212   65211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:36.866263   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:36.912390   65211 cri.go:89] found id: ""
	I0318 21:58:36.912460   65211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:36.929896   65211 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:36.929923   65211 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:36.929931   65211 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:36.929985   65211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:36.947191   65211 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:36.948613   65211 kubeconfig.go:125] found "embed-certs-141758" server: "https://192.168.39.243:8443"
	I0318 21:58:36.951641   65211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:36.966095   65211 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.243
	I0318 21:58:36.966135   65211 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:36.966150   65211 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:36.966216   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:37.022620   65211 cri.go:89] found id: ""
	I0318 21:58:37.022680   65211 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:37.042338   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:37.054534   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:37.054552   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:37.054588   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:37.066099   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:37.066166   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:37.077340   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:37.088158   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:37.088214   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:37.099190   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.110081   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:37.110118   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.121852   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:37.133161   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:37.133215   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:37.144199   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:37.155593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.271593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.921199   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.175721   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.264478   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.377591   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:38.377683   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:38.878031   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.377859   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.417546   65211 api_server.go:72] duration metric: took 1.039957218s to wait for apiserver process to appear ...
	I0318 21:58:39.417576   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:58:39.417599   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:39.418125   65211 api_server.go:269] stopped: https://192.168.39.243:8443/healthz: Get "https://192.168.39.243:8443/healthz": dial tcp 192.168.39.243:8443: connect: connection refused
	I0318 21:58:39.917663   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.450620   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.450656   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.450668   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.489722   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.489755   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.918487   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.924551   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:42.924584   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.418077   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.424938   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:43.424969   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.918053   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.922905   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 21:58:43.931126   65211 api_server.go:141] control plane version: v1.28.4
	I0318 21:58:43.931151   65211 api_server.go:131] duration metric: took 4.513568499s to wait for apiserver health ...
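	(The 403 responses above appear to come from probing /healthz anonymously before the RBAC bootstrap roles finish; once client credentials are presented, the per-check [+]/[-] listing can be reproduced by hand. A rough manual equivalent, assuming the usual minikube certificate layout under the profile directory — the paths below are illustrative, not taken from this log:

	    # Hypothetical manual probe of the same endpoint; "?verbose" yields the check list seen above.
	    curl --cacert "$MINIKUBE_HOME/.minikube/ca.crt" \
	         --cert   "$MINIKUBE_HOME/.minikube/profiles/embed-certs-141758/client.crt" \
	         --key    "$MINIKUBE_HOME/.minikube/profiles/embed-certs-141758/client.key" \
	         "https://192.168.39.243:8443/healthz?verbose"
	)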
	I0318 21:58:43.931159   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:43.931173   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:43.932876   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:58:41.897573   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:41.898012   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:41.898035   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:41.897964   66350 retry.go:31] will retry after 2.645096928s: waiting for machine to come up
	I0318 21:58:44.544646   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:44.545116   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:44.545153   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:44.545053   66350 retry.go:31] will retry after 3.010240256s: waiting for machine to come up
	I0318 21:58:43.934155   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:58:43.948750   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
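	(The 457-byte file pushed here is minikube's bridge CNI configuration. Its exact contents are not shown in this log; a bridge conflist of this kind typically has the following shape, with all values illustrative only:

	    # Sketch of a typical bridge CNI conflist; NOT the exact file minikube generated here.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	)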
	I0318 21:58:43.978849   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:58:43.991046   65211 system_pods.go:59] 8 kube-system pods found
	I0318 21:58:43.991082   65211 system_pods.go:61] "coredns-5dd5756b68-r9pft" [add358cf-d544-4107-a05f-5e60542ea456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:58:43.991089   65211 system_pods.go:61] "etcd-embed-certs-141758" [31274121-ec65-46b5-bcda-65698c28bd1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:58:43.991095   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [61e4c0db-7a20-4c93-83b3-de4738e82614] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:58:43.991100   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [c2ffe900-4e3a-4c21-ae8f-cd42475207c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:58:43.991105   65211 system_pods.go:61] "kube-proxy-klmnb" [45b0c762-4eaf-4e8a-b321-0d474f61086e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:58:43.991109   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [5aeed9aa-9d98-49c0-bf8a-3998738f6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:58:43.991114   65211 system_pods.go:61] "metrics-server-57f55c9bc5-vt7hj" [949e4c0f-6a76-4141-b30c-f27291873f14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:58:43.991123   65211 system_pods.go:61] "storage-provisioner" [0aca1af6-3221-4698-915b-cabb9da662bf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:58:43.991128   65211 system_pods.go:74] duration metric: took 12.25858ms to wait for pod list to return data ...
	I0318 21:58:43.991136   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:58:43.996109   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:58:43.996135   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 21:58:43.996146   65211 node_conditions.go:105] duration metric: took 5.004614ms to run NodePressure ...
	I0318 21:58:43.996163   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:44.227606   65211 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234823   65211 kubeadm.go:733] kubelet initialised
	I0318 21:58:44.234846   65211 kubeadm.go:734] duration metric: took 7.215375ms waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234854   65211 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:58:44.241197   65211 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.248990   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249008   65211 pod_ready.go:81] duration metric: took 7.784519ms for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.249016   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249022   65211 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.254792   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254820   65211 pod_ready.go:81] duration metric: took 5.788084ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.254833   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254846   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.261248   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261272   65211 pod_ready.go:81] duration metric: took 6.415486ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.261282   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261291   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.383016   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383056   65211 pod_ready.go:81] duration metric: took 121.750871ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.383069   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383078   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784241   65211 pod_ready.go:92] pod "kube-proxy-klmnb" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:44.784264   65211 pod_ready.go:81] duration metric: took 401.177044ms for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784272   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
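	(The pod_ready wait loop above can be reproduced interactively against the same profile with standard kubectl, using the context name from this log:

	    # Inspect kube-system pods for the embed-certs-141758 profile.
	    kubectl --context embed-certs-141758 -n kube-system get pods -o wide

	    # Block, up to 4 minutes to match the log's timeout, until CoreDNS reports Ready.
	    kubectl --context embed-certs-141758 -n kube-system \
	      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	)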
	I0318 21:58:48.950018   65699 start.go:364] duration metric: took 4m12.246849763s to acquireMachinesLock for "no-preload-963041"
	I0318 21:58:48.950078   65699 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:48.950087   65699 fix.go:54] fixHost starting: 
	I0318 21:58:48.950522   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:48.950556   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:48.966094   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0318 21:58:48.966492   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:48.966970   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:58:48.966994   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:48.967295   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:48.967443   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:58:48.967548   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:58:48.968800   65699 fix.go:112] recreateIfNeeded on no-preload-963041: state=Stopped err=<nil>
	I0318 21:58:48.968835   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	W0318 21:58:48.969105   65699 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:48.970900   65699 out.go:177] * Restarting existing kvm2 VM for "no-preload-963041" ...
	I0318 21:58:47.559274   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559793   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has current primary IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559814   65622 main.go:141] libmachine: (old-k8s-version-648232) Found IP for machine: 192.168.61.111
	I0318 21:58:47.559828   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserving static IP address...
	I0318 21:58:47.560325   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.560359   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserved static IP address: 192.168.61.111
	I0318 21:58:47.560385   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | skip adding static IP to network mk-old-k8s-version-648232 - found existing host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"}
	I0318 21:58:47.560401   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Getting to WaitForSSH function...
	I0318 21:58:47.560417   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting for SSH to be available...
	I0318 21:58:47.562852   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563285   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.563314   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563494   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH client type: external
	I0318 21:58:47.563522   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa (-rw-------)
	I0318 21:58:47.563561   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:47.563576   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | About to run SSH command:
	I0318 21:58:47.563622   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | exit 0
	I0318 21:58:47.692948   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:47.693373   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:58:47.694034   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:47.696795   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697184   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.697213   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697437   65622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:58:47.697637   65622 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:47.697658   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:47.697846   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.700225   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700525   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.700549   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700649   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.700816   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.700993   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.701112   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.701276   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.701440   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.701450   65622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:47.809658   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:47.809690   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.809920   65622 buildroot.go:166] provisioning hostname "old-k8s-version-648232"
	I0318 21:58:47.809945   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.810132   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.812510   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.812869   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.812896   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.813079   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.813266   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813414   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813559   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.813726   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.813935   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.813954   65622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-648232 && echo "old-k8s-version-648232" | sudo tee /etc/hostname
	I0318 21:58:47.949030   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-648232
	
	I0318 21:58:47.949063   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.952028   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952387   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.952424   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952586   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.952768   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.952972   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.953109   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.953280   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.953488   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.953514   65622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-648232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-648232/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-648232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:48.072416   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:48.072457   65622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:48.072484   65622 buildroot.go:174] setting up certificates
	I0318 21:58:48.072494   65622 provision.go:84] configureAuth start
	I0318 21:58:48.072506   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:48.072802   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.075880   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076202   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.076235   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076407   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.078791   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079125   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.079155   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079292   65622 provision.go:143] copyHostCerts
	I0318 21:58:48.079370   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:48.079385   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:48.079441   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:48.079552   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:48.079565   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:48.079595   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:48.079675   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:48.079686   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:48.079719   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:48.079797   65622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-648232 san=[127.0.0.1 192.168.61.111 localhost minikube old-k8s-version-648232]
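	(If the SANs on the freshly generated server certificate need to be confirmed, openssl can print them directly from the file named in this line — a verification sketch, not part of the test run:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
	)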
	I0318 21:58:48.236852   65622 provision.go:177] copyRemoteCerts
	I0318 21:58:48.236923   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:48.236952   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.239485   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.239807   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.239839   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.240022   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.240187   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.240338   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.240470   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.338739   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:48.367538   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:58:48.397586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:48.425384   65622 provision.go:87] duration metric: took 352.877274ms to configureAuth
	I0318 21:58:48.425415   65622 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:48.425624   65622 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:58:48.425693   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.427989   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428345   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.428365   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428593   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.428793   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.428968   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.429114   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.429269   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.429434   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.429455   65622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:48.706098   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:48.706131   65622 machine.go:97] duration metric: took 1.008474629s to provisionDockerMachine
	I0318 21:58:48.706148   65622 start.go:293] postStartSetup for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:58:48.706165   65622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:48.706193   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.706546   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:48.706580   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.709104   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709434   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.709464   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709589   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.709787   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.709969   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.710109   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.792915   65622 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:48.797845   65622 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:48.797864   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:48.797932   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:48.798038   65622 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:48.798150   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:48.808487   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:48.838863   65622 start.go:296] duration metric: took 132.703395ms for postStartSetup
	I0318 21:58:48.838896   65622 fix.go:56] duration metric: took 19.424816589s for fixHost
	I0318 21:58:48.838927   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.841223   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841572   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.841603   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841683   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.841876   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842015   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842138   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.842295   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.842469   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.842483   65622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:48.949868   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799128.925696756
	
	I0318 21:58:48.949893   65622 fix.go:216] guest clock: 1710799128.925696756
	I0318 21:58:48.949901   65622 fix.go:229] Guest: 2024-03-18 21:58:48.925696756 +0000 UTC Remote: 2024-03-18 21:58:48.838901995 +0000 UTC m=+258.909510680 (delta=86.794761ms)
	I0318 21:58:48.949925   65622 fix.go:200] guest clock delta is within tolerance: 86.794761ms
	I0318 21:58:48.949932   65622 start.go:83] releasing machines lock for "old-k8s-version-648232", held for 19.535879787s
	I0318 21:58:48.949963   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.950245   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.952656   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953000   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.953030   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953184   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953664   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953845   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953931   65622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:48.953973   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.954053   65622 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:48.954070   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.956479   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956764   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956801   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.956828   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956944   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957100   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957250   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957281   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.957302   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.957432   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.957451   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957582   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957721   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957858   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:49.066050   65622 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:49.072126   65622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:49.220860   65622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:49.227821   65622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:49.227882   65622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:49.245262   65622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:49.245285   65622 start.go:494] detecting cgroup driver to use...
	I0318 21:58:49.245359   65622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:49.261736   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:49.278239   65622 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:49.278289   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:49.297240   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:49.312813   65622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:49.435983   65622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:49.584356   65622 docker.go:233] disabling docker service ...
	I0318 21:58:49.584432   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:49.603469   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:49.619602   65622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:49.775541   65622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:49.919861   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:49.940785   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:49.964296   65622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:58:49.964356   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.976612   65622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:49.977221   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.988978   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.000697   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.012348   65622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:50.023873   65622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:50.033574   65622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:50.033611   65622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:50.047262   65622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:58:50.058328   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:50.205960   65622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:50.356293   65622 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:50.356376   65622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:50.361732   65622 start.go:562] Will wait 60s for crictl version
	I0318 21:58:50.361796   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:50.366347   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:50.406298   65622 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:50.406398   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.440705   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.473017   65622 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:58:46.795337   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:49.295100   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.299437   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:48.972407   65699 main.go:141] libmachine: (no-preload-963041) Calling .Start
	I0318 21:58:48.972572   65699 main.go:141] libmachine: (no-preload-963041) Ensuring networks are active...
	I0318 21:58:48.973251   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network default is active
	I0318 21:58:48.973606   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network mk-no-preload-963041 is active
	I0318 21:58:48.973992   65699 main.go:141] libmachine: (no-preload-963041) Getting domain xml...
	I0318 21:58:48.974629   65699 main.go:141] libmachine: (no-preload-963041) Creating domain...
	I0318 21:58:50.190010   65699 main.go:141] libmachine: (no-preload-963041) Waiting to get IP...
	I0318 21:58:50.190750   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.191241   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.191320   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.191220   66466 retry.go:31] will retry after 238.162453ms: waiting for machine to come up
	I0318 21:58:50.430778   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.431262   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.431292   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.431191   66466 retry.go:31] will retry after 318.744541ms: waiting for machine to come up
	I0318 21:58:50.751612   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.752051   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.752086   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.752007   66466 retry.go:31] will retry after 464.29047ms: waiting for machine to come up
	I0318 21:58:51.218462   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.219034   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.219062   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.218983   66466 retry.go:31] will retry after 476.466311ms: waiting for machine to come up
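
The retry.go lines above are a bounded poll loop: ask libvirt for the domain's IP and, if it is not there yet, sleep a growing randomized interval and try again. A generic sketch of that wait-with-backoff pattern; waitFor and the stand-in lookup are hypothetical names, not the libmachine API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls fn until it succeeds or the deadline passes, sleeping a
    // randomized, growing interval between attempts, like the log's retry.go.
    func waitFor(fn func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	backoff := 200 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		backoff *= 2
    	}
    }

    func main() {
    	err := waitFor(func() error {
    		// stand-in for "look up the domain's current IP in the DHCP leases"
    		return errors.New("waiting for machine to come up")
    	}, 3*time.Second)
    	fmt.Println(err)
    }
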
	I0318 21:58:50.474496   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:50.477908   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478353   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:50.478389   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478618   65622 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:50.483617   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
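
The bash one-liner above removes any stale host.minikube.internal line from /etc/hosts and appends a fresh one through a temp file. The same idea expressed in Go, hedged as an illustration; ensureHostsEntry is a made-up helper, only the path, IP and hostname come from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one
    // line maps host to ip, going through a temp file like the bash one-liner.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop any stale entry for this host
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
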
	I0318 21:58:50.499147   65622 kubeadm.go:877] updating cluster {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:50.499269   65622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:58:50.499333   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:50.551649   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:50.551716   65622 ssh_runner.go:195] Run: which lz4
	I0318 21:58:50.556525   65622 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:58:50.561566   65622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:50.561594   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:58:52.646283   65622 crio.go:462] duration metric: took 2.089798336s to copy over tarball
	I0318 21:58:52.646359   65622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:53.792483   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.696634   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.697179   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.697208   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.697099   66466 retry.go:31] will retry after 520.896381ms: waiting for machine to come up
	I0318 21:58:52.219861   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:52.220480   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:52.220506   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:52.220414   66466 retry.go:31] will retry after 872.240898ms: waiting for machine to come up
	I0318 21:58:53.094123   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.094547   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.094580   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.094499   66466 retry.go:31] will retry after 757.325359ms: waiting for machine to come up
	I0318 21:58:53.852954   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.853422   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.853453   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.853358   66466 retry.go:31] will retry after 1.459327383s: waiting for machine to come up
	I0318 21:58:55.313969   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:55.314382   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:55.314413   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:55.314328   66466 retry.go:31] will retry after 1.373606235s: waiting for machine to come up
	I0318 21:58:55.995228   65622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.348837805s)
	I0318 21:58:55.995262   65622 crio.go:469] duration metric: took 3.348951107s to extract the tarball
	I0318 21:58:55.995271   65622 ssh_runner.go:146] rm: /preloaded.tar.lz4
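
The preload step above follows a simple pattern: stat the target tarball, copy it over only if the stat fails, extract it into /var with tar, then delete it. A compact local sketch of that check-copy-extract sequence; the tar flags mirror the log, and the plain file copy stands in for the scp over SSH:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"os/exec"
    )

    // extractPreload copies src to dst only if dst does not exist yet, unpacks
    // it into destDir with the same tar flags shown in the log, then removes it.
    func extractPreload(src, dst, destDir string) error {
    	if _, err := os.Stat(dst); os.IsNotExist(err) {
    		in, err := os.Open(src) // stand-in for the scp step in the log
    		if err != nil {
    			return err
    		}
    		defer in.Close()
    		out, err := os.Create(dst)
    		if err != nil {
    			return err
    		}
    		if _, err := io.Copy(out, in); err != nil {
    			out.Close()
    			return err
    		}
    		if err := out.Close(); err != nil {
    			return err
    		}
    	}
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", destDir, "-xf", dst)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		return fmt.Errorf("extract %s: %w", dst, err)
    	}
    	return os.Remove(dst)
    }

    func main() {
    	if err := extractPreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
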
	I0318 21:58:56.043148   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:56.091295   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:56.091320   65622 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:58:56.091409   65622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.091418   65622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.091431   65622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.091421   65622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.091448   65622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.091471   65622 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.091506   65622 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.091512   65622 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:58:56.092923   65622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.093028   65622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.093048   65622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.093052   65622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.092924   65622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.093136   65622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.093143   65622 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:58:56.093250   65622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.239200   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.242232   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.244160   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.248823   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.255548   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.264753   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.306940   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:58:56.359783   65622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:58:56.359825   65622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.359874   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413012   65622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:58:56.413051   65622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.413101   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413420   65622 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:58:56.413455   65622 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.413490   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.442743   65622 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:58:56.442787   65622 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.442832   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.450680   65622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:58:56.450733   65622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.450798   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.462926   65622 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:58:56.462963   65622 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:58:56.462989   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.462992   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.463034   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.463090   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.463138   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.463145   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.463159   65622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:58:56.463183   65622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.463221   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.592127   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.592159   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:58:56.593931   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:58:56.593968   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:58:56.593973   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:58:56.594059   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:58:56.594143   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:58:56.660138   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:58:56.660360   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:58:56.983635   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:57.142451   65622 cache_images.go:92] duration metric: took 1.051113719s to LoadCachedImages
	W0318 21:58:57.142554   65622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0318 21:58:57.142575   65622 kubeadm.go:928] updating node { 192.168.61.111 8443 v1.20.0 crio true true} ...
	I0318 21:58:57.142723   65622 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-648232 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:57.142797   65622 ssh_runner.go:195] Run: crio config
	I0318 21:58:57.195416   65622 cni.go:84] Creating CNI manager for ""
	I0318 21:58:57.195439   65622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:57.195451   65622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:57.195468   65622 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-648232 NodeName:old-k8s-version-648232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:58:57.195585   65622 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-648232"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:57.195650   65622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:58:57.208700   65622 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:57.208757   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:57.220276   65622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 21:58:57.239513   65622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:57.258540   65622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
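
kubeadm.go renders the cluster options into the InitConfiguration/ClusterConfiguration/KubeletConfiguration YAML shown above and ships it to /var/tmp/minikube/kubeadm.yaml.new. A toy version of that templating step using text/template; the opts struct covers only a handful of the fields minikube actually templates, and the values are copied from this log:

    package main

    import (
    	"os"
    	"text/template"
    )

    // opts is a deliberately tiny stand-in for minikube's kubeadm parameters.
    type opts struct {
    	AdvertiseAddress  string
    	BindPort          int
    	NodeName          string
    	KubernetesVersion string
    	PodSubnet         string
    	ServiceSubnet     string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: /var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// values copied from the old-k8s-version-648232 log above
    	cfg := opts{
    		AdvertiseAddress:  "192.168.61.111",
    		BindPort:          8443,
    		NodeName:          "old-k8s-version-648232",
    		KubernetesVersion: "v1.20.0",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    	}
    	if err := t.Execute(os.Stdout, cfg); err != nil {
    		os.Exit(1)
    	}
    }
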
	I0318 21:58:57.277932   65622 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:57.282433   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:57.298049   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:57.427745   65622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:57.459845   65622 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232 for IP: 192.168.61.111
	I0318 21:58:57.459867   65622 certs.go:194] generating shared ca certs ...
	I0318 21:58:57.459904   65622 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:57.460072   65622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:57.460123   65622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:57.460138   65622 certs.go:256] generating profile certs ...
	I0318 21:58:57.460254   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key
	I0318 21:58:57.460328   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4
	I0318 21:58:57.460376   65622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key
	I0318 21:58:57.460521   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:57.460560   65622 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:57.460573   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:57.460602   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:57.460637   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:57.460668   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:57.460733   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:57.461586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:57.515591   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:57.541750   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:57.575282   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:57.617495   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:58:57.657111   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:58:57.705104   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:57.737956   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:58:57.766218   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:57.793952   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:57.824458   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:57.852188   65622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:57.872773   65622 ssh_runner.go:195] Run: openssl version
	I0318 21:58:57.880817   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:57.896644   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902576   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902636   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.908893   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:57.922730   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:57.936508   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941802   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941839   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.948093   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:57.961852   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:57.974049   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978886   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978929   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.984848   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:57.997033   65622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:58.002171   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:58.008665   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:58.014908   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:58.021663   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:58.029605   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:58.038208   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
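
The run of openssl x509 -checkend 86400 calls confirms that each existing control-plane certificate stays valid for at least another day before the restart reuses it. The equivalent check written with Go's standard library; the certificate path is just one example taken from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // validForAtLeast reports whether the PEM certificate at path will still be
    // valid after d, which is what `openssl x509 -checkend` tests.
    func validForAtLeast(path string, d time.Duration) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	if time.Now().Add(d).After(cert.NotAfter) {
    		return fmt.Errorf("certificate expires at %s", cert.NotAfter)
    	}
    	return nil
    }

    func main() {
    	err := validForAtLeast("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("certificate will not expire within 24h")
    }
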
	I0318 21:58:58.044738   65622 kubeadm.go:391] StartCluster: {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:58.044828   65622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:58.044881   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.095866   65622 cri.go:89] found id: ""
	I0318 21:58:58.096010   65622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:58.108723   65622 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:58.108745   65622 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:58.108751   65622 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:58.108797   65622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:58.120754   65622 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:58.121803   65622 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:58:58.122532   65622 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-648232" cluster setting kubeconfig missing "old-k8s-version-648232" context setting]
	I0318 21:58:58.123561   65622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:58.125229   65622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:58.136331   65622 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.111
	I0318 21:58:58.136360   65622 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:58.136372   65622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:58.136416   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.179370   65622 cri.go:89] found id: ""
	I0318 21:58:58.179465   65622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:58.197860   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:58.208772   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:58.208796   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:58.208837   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:58.219033   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:58.219090   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:58.230223   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:58.240823   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:58.240886   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:58.251629   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.262525   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:58.262573   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.274831   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:58.286644   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:58.286690   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:58.298127   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:58.309664   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:58.456818   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.106974   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.334718   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.434113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.534368   65622 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:59.534461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:57.057776   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:57.791727   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:57.791754   65211 pod_ready.go:81] duration metric: took 13.007474768s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:57.791769   65211 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:59.800074   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:56.689643   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:56.690039   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:56.690064   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:56.690020   66466 retry.go:31] will retry after 1.905319343s: waiting for machine to come up
	I0318 21:58:58.597961   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:58.598470   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:58.598501   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:58.598420   66466 retry.go:31] will retry after 2.720364267s: waiting for machine to come up
	I0318 21:59:01.321901   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:01.322290   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:01.322312   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:01.322254   66466 retry.go:31] will retry after 2.73029124s: waiting for machine to come up
	I0318 21:59:00.035251   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:00.534822   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.034721   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.535447   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.034809   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.535193   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.034597   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.534670   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.035493   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.535148   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
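
The repeated pgrep commands above are a fixed-interval wait for the kube-apiserver process to appear after the kubeadm init phases run. A bare-bones version of that poll loop; the pgrep pattern is copied from the log and the 500ms interval matches the timestamps, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep every interval until the kube-apiserver
    // process shows up or the timeout elapses.
    func waitForAPIServer(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		// same pattern as the log: a full-command-line, newest-match pgrep
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	if err := waitForAPIServer(500*time.Millisecond, 4*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kube-apiserver process is up")
    }
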
	I0318 21:59:02.299143   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.800475   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.054294   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:04.054715   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:04.054752   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:04.054671   66466 retry.go:31] will retry after 3.148777081s: waiting for machine to come up
	I0318 21:59:08.706453   65170 start.go:364] duration metric: took 55.86344587s to acquireMachinesLock for "default-k8s-diff-port-660775"
	I0318 21:59:08.706504   65170 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:59:08.706515   65170 fix.go:54] fixHost starting: 
	I0318 21:59:08.706934   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:08.706970   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:08.723564   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0318 21:59:08.723935   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:08.724359   65170 main.go:141] libmachine: Using API Version  1
	I0318 21:59:08.724381   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:08.724671   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:08.724874   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:08.725045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 21:59:08.726635   65170 fix.go:112] recreateIfNeeded on default-k8s-diff-port-660775: state=Stopped err=<nil>
	I0318 21:59:08.726656   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	W0318 21:59:08.726813   65170 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:59:08.728839   65170 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-660775" ...
	I0318 21:59:05.035054   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:05.535108   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.035211   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.535398   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.035017   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.534769   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.035221   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.534593   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.035328   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.534533   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.730181   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Start
	I0318 21:59:08.730374   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring networks are active...
	I0318 21:59:08.731140   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network default is active
	I0318 21:59:08.731488   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network mk-default-k8s-diff-port-660775 is active
	I0318 21:59:08.731850   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Getting domain xml...
	I0318 21:59:08.732544   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Creating domain...
	I0318 21:59:10.014924   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting to get IP...
	I0318 21:59:10.015822   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016215   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016299   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.016206   66608 retry.go:31] will retry after 301.369371ms: waiting for machine to come up
	I0318 21:59:07.205807   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206239   65699 main.go:141] libmachine: (no-preload-963041) Found IP for machine: 192.168.72.84
	I0318 21:59:07.206266   65699 main.go:141] libmachine: (no-preload-963041) Reserving static IP address...
	I0318 21:59:07.206281   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has current primary IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206636   65699 main.go:141] libmachine: (no-preload-963041) Reserved static IP address: 192.168.72.84
	I0318 21:59:07.206659   65699 main.go:141] libmachine: (no-preload-963041) Waiting for SSH to be available...
	I0318 21:59:07.206686   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.206711   65699 main.go:141] libmachine: (no-preload-963041) DBG | skip adding static IP to network mk-no-preload-963041 - found existing host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"}
	I0318 21:59:07.206728   65699 main.go:141] libmachine: (no-preload-963041) DBG | Getting to WaitForSSH function...
	I0318 21:59:07.208790   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209157   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.209202   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209306   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH client type: external
	I0318 21:59:07.209331   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa (-rw-------)
	I0318 21:59:07.209367   65699 main.go:141] libmachine: (no-preload-963041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:07.209381   65699 main.go:141] libmachine: (no-preload-963041) DBG | About to run SSH command:
	I0318 21:59:07.209395   65699 main.go:141] libmachine: (no-preload-963041) DBG | exit 0
	I0318 21:59:07.337357   65699 main.go:141] libmachine: (no-preload-963041) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:07.337688   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetConfigRaw
	I0318 21:59:07.338258   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.340609   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.340957   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.340996   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.341213   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:59:07.341396   65699 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:07.341462   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:07.341668   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.343956   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344275   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.344311   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344395   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.344580   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344756   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344891   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.345086   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.345264   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.345276   65699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:07.457491   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:07.457543   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457778   65699 buildroot.go:166] provisioning hostname "no-preload-963041"
	I0318 21:59:07.457802   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457975   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.460729   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461120   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.461145   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461286   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.461480   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461643   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461797   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.461980   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.462179   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.462193   65699 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-963041 && echo "no-preload-963041" | sudo tee /etc/hostname
	I0318 21:59:07.592194   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-963041
	
	I0318 21:59:07.592219   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.594794   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595141   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.595177   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595305   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.595484   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595673   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595836   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.595987   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.596144   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.596160   65699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-963041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-963041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-963041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:07.719593   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:07.719622   65699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:07.719655   65699 buildroot.go:174] setting up certificates
	I0318 21:59:07.719667   65699 provision.go:84] configureAuth start
	I0318 21:59:07.719681   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.719928   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.722544   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.722907   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.722935   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.723095   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.725108   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725391   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.725420   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725522   65699 provision.go:143] copyHostCerts
	I0318 21:59:07.725582   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:07.725595   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:07.725665   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:07.725780   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:07.725792   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:07.725817   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:07.725874   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:07.725881   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:07.725898   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:07.725945   65699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.no-preload-963041 san=[127.0.0.1 192.168.72.84 localhost minikube no-preload-963041]
	I0318 21:59:07.893632   65699 provision.go:177] copyRemoteCerts
	I0318 21:59:07.893685   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:07.893711   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.896227   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896501   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.896527   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896692   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.896859   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.897035   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.897205   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:07.983501   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:08.014432   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:59:08.043755   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:59:08.074388   65699 provision.go:87] duration metric: took 354.707214ms to configureAuth
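The configureAuth phase logged above generates a server certificate signed by the test host's CA, with the SAN list shown in the log (127.0.0.1, 192.168.72.84, localhost, minikube, no-preload-963041), and then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. The following is a minimal Go sketch of that cert-generation step, not minikube's actual provision.go code: the CA paths, SANs and the 26280h expiry are taken from the log, while the key size, PKCS#1 key encoding and helper names are assumptions for illustration.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA certificate and private key (paths mirror the log; the
	// PKCS#1 "RSA PRIVATE KEY" encoding is an assumption).
	caCertPEM, err := os.ReadFile("certs/ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// Generate the server key pair (2048-bit RSA assumed for the example).
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	// Server certificate template with the SANs listed in the log.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-963041"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.84")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-963041"},
	}

	// Sign the server certificate with the CA and print it in PEM form.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}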
	I0318 21:59:08.074413   65699 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:08.074571   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:08.074638   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.077314   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077658   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.077690   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077837   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.077996   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078150   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078289   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.078435   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.078582   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.078596   65699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:08.446711   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:08.446745   65699 machine.go:97] duration metric: took 1.105332987s to provisionDockerMachine
	I0318 21:59:08.446757   65699 start.go:293] postStartSetup for "no-preload-963041" (driver="kvm2")
	I0318 21:59:08.446772   65699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:08.446787   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.447090   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:08.447118   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.449551   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.449917   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.449955   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.450117   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.450308   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.450471   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.450611   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.542283   65699 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:08.547389   65699 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:08.547423   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:08.547501   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:08.547606   65699 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:08.547732   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:08.558721   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:08.586136   65699 start.go:296] duration metric: took 139.367706ms for postStartSetup
	I0318 21:59:08.586177   65699 fix.go:56] duration metric: took 19.636089577s for fixHost
	I0318 21:59:08.586201   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.588809   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589192   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.589219   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589435   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.589604   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589731   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589838   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.589972   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.590182   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.590197   65699 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:08.706260   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799148.650279332
	
	I0318 21:59:08.706283   65699 fix.go:216] guest clock: 1710799148.650279332
	I0318 21:59:08.706293   65699 fix.go:229] Guest: 2024-03-18 21:59:08.650279332 +0000 UTC Remote: 2024-03-18 21:59:08.586181408 +0000 UTC m=+272.029432082 (delta=64.097924ms)
	I0318 21:59:08.706337   65699 fix.go:200] guest clock delta is within tolerance: 64.097924ms
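The fix.go lines above run `date` on the guest, compare it with the host-side timestamp, and accept the clock because the 64.097924ms delta is within tolerance. Below is a minimal Go sketch of that comparison only; the two timestamps are the ones from the log, while the 1s tolerance and the function layout are assumptions for illustration.

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1710799148, 650279332)                        // value echoed by `date` on the guest
	remote := time.Date(2024, 3, 18, 21, 59, 8, 586181408, time.UTC) // host-side timestamp from the log

	const tolerance = time.Second // assumed threshold
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance, no adjustment needed\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be reset\n", delta)
	}
}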
	I0318 21:59:08.706350   65699 start.go:83] releasing machines lock for "no-preload-963041", held for 19.756290817s
	I0318 21:59:08.706384   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.706707   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:08.709113   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709389   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.709417   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709561   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710009   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710155   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710229   65699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:08.710278   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.710330   65699 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:08.710349   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.713131   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713154   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713464   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713492   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713521   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713536   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713632   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713739   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713824   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713987   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713988   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714117   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.714177   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714337   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.827151   65699 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:08.833847   65699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:08.985638   65699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:08.992294   65699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:08.992372   65699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:09.009419   65699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
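The two cni.go lines above disable any pre-existing bridge/podman CNI configuration by renaming matching files under /etc/cni/net.d to *.mk_disabled, exactly as the `find ... -exec mv` command shows. The sketch below reproduces that rename logic in Go; the pattern and directory come from the log, but the function name and in-process implementation are assumptions for illustration.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		// Match the same pattern as the log: *bridge* or *podman*, skipping
		// files that were already renamed to *.mk_disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}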
	I0318 21:59:09.009444   65699 start.go:494] detecting cgroup driver to use...
	I0318 21:59:09.009509   65699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:09.031942   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:09.051842   65699 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:09.051901   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:09.068136   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:09.084445   65699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:09.234323   65699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:09.402144   65699 docker.go:233] disabling docker service ...
	I0318 21:59:09.402210   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:09.419960   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:09.434836   65699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:09.572242   65699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:09.718817   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:09.734607   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:09.756470   65699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:09.756533   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.768595   65699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:09.768685   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.780726   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.800700   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.817396   65699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:09.829896   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.842211   65699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.867273   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.880909   65699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:09.893254   65699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:09.893297   65699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:09.910897   65699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:09.922400   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:10.065248   65699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:10.223498   65699 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:10.223577   65699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:10.230686   65699 start.go:562] Will wait 60s for crictl version
	I0318 21:59:10.230752   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.235527   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:10.278655   65699 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:10.278756   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.310992   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.344925   65699 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 21:59:07.298973   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:09.799803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:10.346255   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:10.349081   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349418   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:10.349437   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349657   65699 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:10.354793   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""

	I0318 21:59:10.369744   65699 kubeadm.go:877] updating cluster {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:10.369893   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:59:10.369951   65699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:10.409975   65699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 21:59:10.410001   65699 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:59:10.410062   65699 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.410074   65699 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.410086   65699 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.410122   65699 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.410148   65699 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.410166   65699 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 21:59:10.410213   65699 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.410223   65699 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411690   65699 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.411695   65699 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 21:59:10.411730   65699 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.411747   65699 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.411764   65699 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.411793   65699 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.553195   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 21:59:10.553249   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.555774   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.559123   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.562266   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.571390   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.592690   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.702213   65699 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 21:59:10.702265   65699 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.702314   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857028   65699 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 21:59:10.857072   65699 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.857087   65699 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 21:59:10.857117   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857146   65699 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.857154   65699 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 21:59:10.857180   65699 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.857197   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857214   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857211   65699 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 21:59:10.857250   65699 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.857254   65699 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 21:59:10.857264   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.857275   65699 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.857282   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857305   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.872164   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.872195   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.872268   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.927043   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.927147   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.927095   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.927219   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.972625   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 21:59:10.972740   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:11.016239   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 21:59:11.016291   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.016356   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:11.016380   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.047703   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 21:59:11.047732   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047784   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047849   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.047952   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.069007   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 21:59:11.069064   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 21:59:11.069095   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 21:59:11.069126   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 21:59:11.069139   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
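The cache_images block above checks each required image against the container runtime with `sudo podman image inspect`, removes stale tags with crictl, and loads the tarballs that were staged under /var/lib/minikube/images with `sudo podman load -i`. A minimal Go sketch of that check-then-load loop follows; the image names, tarball paths and commands mirror the log, while the loop structure and error handling are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
)

func ensureImage(image, tarball string) error {
	// Already present in the runtime? Then nothing to do.
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil
	}
	// Not present: load it from the cached tarball.
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("loading %s from %s: %v\n%s", image, tarball, err, out)
	}
	return nil
}

func main() {
	images := map[string]string{
		"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2": "/var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2",
		"registry.k8s.io/etcd:3.5.10-0":                        "/var/lib/minikube/images/etcd_3.5.10-0",
		"registry.k8s.io/kube-scheduler:v1.29.0-rc.2":          "/var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2",
		"registry.k8s.io/coredns/coredns:v1.11.1":              "/var/lib/minikube/images/coredns_v1.11.1",
	}
	for image, tarball := range images {
		if err := ensureImage(image, tarball); err != nil {
			fmt.Println(err)
		}
	}
}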
	I0318 21:59:10.035384   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.534785   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.034607   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.535142   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.035259   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.535494   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.034673   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.535452   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.034630   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.535058   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
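The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines from process 65622 are a poll loop waiting for the kube-apiserver process to appear, retried roughly every 500ms judging by the timestamps. Below is a minimal Go sketch of such a poll; the pgrep command and the ~500ms interval come from the log, while the timeout value and function name are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(4 * time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("kube-apiserver process is running")
	}
}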
	I0318 21:59:10.319858   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320279   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320310   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.320224   66608 retry.go:31] will retry after 253.332307ms: waiting for machine to come up
	I0318 21:59:10.575748   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576242   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576271   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.576194   66608 retry.go:31] will retry after 484.439329ms: waiting for machine to come up
	I0318 21:59:11.061837   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062291   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.062247   66608 retry.go:31] will retry after 520.757249ms: waiting for machine to come up
	I0318 21:59:11.585112   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585541   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585571   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.585485   66608 retry.go:31] will retry after 482.335377ms: waiting for machine to come up
	I0318 21:59:12.068813   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069420   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069456   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:12.069374   66608 retry.go:31] will retry after 936.563875ms: waiting for machine to come up
	I0318 21:59:13.007582   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.007986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.008012   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.007945   66608 retry.go:31] will retry after 864.468016ms: waiting for machine to come up
	I0318 21:59:13.874400   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874910   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874942   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.874875   66608 retry.go:31] will retry after 1.239808671s: waiting for machine to come up
	I0318 21:59:15.116440   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116834   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116855   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:15.116784   66608 retry.go:31] will retry after 1.208141339s: waiting for machine to come up
	I0318 21:59:11.804059   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:14.301199   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:16.301517   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:11.928081   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.330891   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.28291236s)
	I0318 21:59:14.330933   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330948   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.261785854s)
	I0318 21:59:14.330971   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330974   65699 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.402863992s)
	I0318 21:59:14.330979   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.283167958s)
	I0318 21:59:14.330996   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 21:59:14.331011   65699 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 21:59:14.331019   65699 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331043   65699 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.331064   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331086   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:14.336430   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:15.034609   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:15.534895   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.034956   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.535474   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.534736   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.035297   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.534669   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.035540   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.534617   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327415   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:16.327350   66608 retry.go:31] will retry after 2.24875206s: waiting for machine to come up
	I0318 21:59:18.578068   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578644   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:18.578589   66608 retry.go:31] will retry after 2.267791851s: waiting for machine to come up
	I0318 21:59:18.800406   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:20.800524   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:18.591731   65699 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.255273393s)
	I0318 21:59:18.591789   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 21:59:18.591897   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:18.591937   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.260848845s)
	I0318 21:59:18.591958   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 21:59:18.591986   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:18.592046   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:19.859577   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.267508443s)
	I0318 21:59:19.859608   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 21:59:19.859637   65699 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:19.859641   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.267714811s)
	I0318 21:59:19.859674   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 21:59:19.859685   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:20.035133   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.534922   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.035083   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.534538   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.035505   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.535008   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.035123   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.535181   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.034939   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.534985   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.847586   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848099   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848135   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:20.848048   66608 retry.go:31] will retry after 2.918466892s: waiting for machine to come up
	I0318 21:59:23.768491   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.768999   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.769030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:23.768962   66608 retry.go:31] will retry after 4.373256501s: waiting for machine to come up
	I0318 21:59:22.800765   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:24.801392   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:21.944666   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.084944906s)
	I0318 21:59:21.944700   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 21:59:21.944720   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:21.944766   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:24.714752   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.769964684s)
	I0318 21:59:24.714793   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 21:59:24.714827   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:24.714884   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:25.035324   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:25.534635   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.034965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.035448   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.534690   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.034991   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.034585   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.535220   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.146019   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146507   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Found IP for machine: 192.168.50.150
	I0318 21:59:28.146533   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserving static IP address...
	I0318 21:59:28.146549   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has current primary IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146939   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.146966   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserved static IP address: 192.168.50.150
	I0318 21:59:28.146986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | skip adding static IP to network mk-default-k8s-diff-port-660775 - found existing host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"}
	I0318 21:59:28.147006   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Getting to WaitForSSH function...
	I0318 21:59:28.147030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for SSH to be available...
	I0318 21:59:28.149408   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149771   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.149799   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149929   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH client type: external
	I0318 21:59:28.149978   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa (-rw-------)
	I0318 21:59:28.150020   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:28.150039   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | About to run SSH command:
	I0318 21:59:28.150050   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | exit 0
	I0318 21:59:28.273437   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:28.273768   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetConfigRaw
	I0318 21:59:28.274402   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.277330   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.277757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277997   65170 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/config.json ...
	I0318 21:59:28.278217   65170 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:28.278240   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:28.278435   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.280754   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281149   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.281178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281318   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.281495   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281646   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281796   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.281955   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.282163   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.282185   65170 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:28.390614   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:28.390642   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.390896   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:59:28.390923   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.391095   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.394421   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.394838   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.394876   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.395178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.395410   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395775   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.395953   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.396145   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.396160   65170 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-660775 && echo "default-k8s-diff-port-660775" | sudo tee /etc/hostname
	I0318 21:59:28.522303   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-660775
	
	I0318 21:59:28.522347   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.525224   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.525667   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525789   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.525961   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526122   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.526471   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.526651   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.526676   65170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-660775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-660775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-660775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:28.641488   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:28.641521   65170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:28.641547   65170 buildroot.go:174] setting up certificates
	I0318 21:59:28.641555   65170 provision.go:84] configureAuth start
	I0318 21:59:28.641564   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.641871   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.644934   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.645301   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645425   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.647753   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648089   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.648119   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648360   65170 provision.go:143] copyHostCerts
	I0318 21:59:28.648423   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:28.648435   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:28.648507   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:28.648620   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:28.648631   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:28.648660   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:28.648731   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:28.648740   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:28.648769   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:28.648829   65170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-660775 san=[127.0.0.1 192.168.50.150 default-k8s-diff-port-660775 localhost minikube]
	I0318 21:59:28.697191   65170 provision.go:177] copyRemoteCerts
	I0318 21:59:28.697253   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:28.697274   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.699919   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700237   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.700269   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700477   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.700694   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.700882   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.701060   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:28.793840   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:28.829285   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 21:59:28.857628   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:59:28.886344   65170 provision.go:87] duration metric: took 244.778215ms to configureAuth
	I0318 21:59:28.886366   65170 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:28.886527   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:59:28.886593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.889885   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890321   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.890351   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890534   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.890721   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.890879   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.891013   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.891190   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.891366   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.891399   65170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:29.189002   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:29.189033   65170 machine.go:97] duration metric: took 910.801375ms to provisionDockerMachine
	I0318 21:59:29.189046   65170 start.go:293] postStartSetup for "default-k8s-diff-port-660775" (driver="kvm2")
	I0318 21:59:29.189058   65170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:29.189083   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.189409   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:29.189438   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.192164   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.192512   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.192866   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.193045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.193190   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.277850   65170 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:29.282886   65170 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:29.282909   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:29.282975   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:29.283065   65170 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:29.283172   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:29.296052   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:29.323906   65170 start.go:296] duration metric: took 134.847993ms for postStartSetup
	I0318 21:59:29.323945   65170 fix.go:56] duration metric: took 20.61742941s for fixHost
	I0318 21:59:29.323969   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.326616   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.326920   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.327063   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.327300   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327472   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327622   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.327853   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:29.328058   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:29.328070   65170 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:29.430348   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799169.377980776
	
	I0318 21:59:29.430377   65170 fix.go:216] guest clock: 1710799169.377980776
	I0318 21:59:29.430386   65170 fix.go:229] Guest: 2024-03-18 21:59:29.377980776 +0000 UTC Remote: 2024-03-18 21:59:29.323950953 +0000 UTC m=+359.071824665 (delta=54.029823ms)
	I0318 21:59:29.430411   65170 fix.go:200] guest clock delta is within tolerance: 54.029823ms
	I0318 21:59:29.430420   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 20.723939352s
	I0318 21:59:29.430450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.430727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:29.433339   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433686   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.433713   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433865   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434308   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434531   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434632   65170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:29.434682   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.434783   65170 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:29.434811   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.437380   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437479   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437731   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437760   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437829   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437880   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.438033   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438170   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438244   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438332   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438393   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438603   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.438694   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.540670   65170 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:29.547318   65170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:29.704221   65170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:29.710762   65170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:29.710832   65170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:29.727820   65170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:29.727838   65170 start.go:494] detecting cgroup driver to use...
	I0318 21:59:29.727905   65170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:29.745750   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:29.760984   65170 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:29.761024   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:29.776639   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:29.791749   65170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:29.914380   65170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:30.096200   65170 docker.go:233] disabling docker service ...
	I0318 21:59:30.096281   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:30.112512   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:30.126090   65170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:30.258617   65170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:30.397700   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:30.420478   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:30.443197   65170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:30.443282   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.455577   65170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:30.455630   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.467898   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.480041   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.492501   65170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:30.505178   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.517657   65170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.537376   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.554749   65170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:30.570281   65170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:30.570352   65170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:30.587991   65170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:30.600354   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:30.744678   65170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:30.902192   65170 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:30.902279   65170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:30.907869   65170 start.go:562] Will wait 60s for crictl version
	I0318 21:59:30.907937   65170 ssh_runner.go:195] Run: which crictl
	I0318 21:59:30.913588   65170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:30.957344   65170 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:30.957431   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:30.991141   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:31.024452   65170 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:59:27.301221   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:29.799576   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:26.781379   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.066468133s)
	I0318 21:59:26.781415   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 21:59:26.781445   65699 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:26.781493   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:27.747707   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 21:59:27.747764   65699 cache_images.go:123] Successfully loaded all cached images
	I0318 21:59:27.747769   65699 cache_images.go:92] duration metric: took 17.337757279s to LoadCachedImages
	I0318 21:59:27.747781   65699 kubeadm.go:928] updating node { 192.168.72.84 8443 v1.29.0-rc.2 crio true true} ...
	I0318 21:59:27.747907   65699 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-963041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:27.747986   65699 ssh_runner.go:195] Run: crio config
	I0318 21:59:27.810020   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:27.810048   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:27.810060   65699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:27.810078   65699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.84 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963041 NodeName:no-preload-963041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:27.810242   65699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963041"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:27.810327   65699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 21:59:27.823120   65699 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:27.823172   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:27.834742   65699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 21:59:27.854365   65699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 21:59:27.872873   65699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 21:59:27.891245   65699 ssh_runner.go:195] Run: grep 192.168.72.84	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:27.895305   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:27.907928   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:28.044997   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:28.064471   65699 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041 for IP: 192.168.72.84
	I0318 21:59:28.064489   65699 certs.go:194] generating shared ca certs ...
	I0318 21:59:28.064503   65699 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:28.064668   65699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:28.064733   65699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:28.064747   65699 certs.go:256] generating profile certs ...
	I0318 21:59:28.064847   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/client.key
	I0318 21:59:28.064927   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key.53f57e82
	I0318 21:59:28.064975   65699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key
	I0318 21:59:28.065090   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:28.065140   65699 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:28.065154   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:28.065190   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:28.065218   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:28.065244   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:28.065292   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:28.066189   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:28.108239   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:28.147385   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:28.191255   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:28.231079   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 21:59:28.269730   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:59:28.302326   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:28.331762   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:28.359487   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:28.390196   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:28.422323   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:28.452212   65699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:28.476910   65699 ssh_runner.go:195] Run: openssl version
	I0318 21:59:28.483480   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:28.495230   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500728   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500771   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.507487   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:28.520368   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:28.533700   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540767   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540817   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.549380   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:28.566307   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:28.582377   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589139   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589192   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.597396   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:59:28.610189   65699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:28.616488   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:28.625547   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:28.634680   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:28.643077   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:28.652470   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:28.660641   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:59:28.669216   65699 kubeadm.go:391] StartCluster: {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:28.669342   65699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:28.669444   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.719357   65699 cri.go:89] found id: ""
	I0318 21:59:28.719427   65699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:28.733158   65699 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:28.733179   65699 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:28.733186   65699 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:28.733234   65699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:28.744804   65699 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:28.745805   65699 kubeconfig.go:125] found "no-preload-963041" server: "https://192.168.72.84:8443"
	I0318 21:59:28.747888   65699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:28.757871   65699 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.84
	I0318 21:59:28.757896   65699 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:28.757918   65699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:28.757964   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.805988   65699 cri.go:89] found id: ""
	I0318 21:59:28.806057   65699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:28.829257   65699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:28.841515   65699 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:28.841543   65699 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:28.841594   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:59:28.853433   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:28.853499   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:28.864593   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:59:28.875236   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:28.875285   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:28.887756   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.898219   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:28.898271   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.909308   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:59:28.919480   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:28.919540   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:28.930305   65699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:28.941125   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:29.056129   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.261585   65699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.205423679s)
	I0318 21:59:30.261614   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.498583   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.589160   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.713046   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:30.713150   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.214160   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.034539   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.535237   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.034842   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.534620   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.034614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.534583   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.035348   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.534614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.034683   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.534528   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.025614   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:31.028381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028758   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:31.028783   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028960   65170 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:31.033836   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:31.048652   65170 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:31.048798   65170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:59:31.048853   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:31.089246   65170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:59:31.089322   65170 ssh_runner.go:195] Run: which lz4
	I0318 21:59:31.094026   65170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:59:31.098900   65170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:59:31.098929   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:59:33.166556   65170 crio.go:462] duration metric: took 2.072562246s to copy over tarball
	I0318 21:59:33.166639   65170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:59:31.810567   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:34.301018   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:36.346463   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:31.714009   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.762157   65699 api_server.go:72] duration metric: took 1.049110677s to wait for apiserver process to appear ...
	I0318 21:59:31.762188   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:31.762210   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:31.762737   65699 api_server.go:269] stopped: https://192.168.72.84:8443/healthz: Get "https://192.168.72.84:8443/healthz": dial tcp 192.168.72.84:8443: connect: connection refused
	I0318 21:59:32.263205   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.738750   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.738785   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.738802   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.804061   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.804102   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.804116   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.842097   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.842144   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:35.262351   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.267395   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.267439   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:35.763016   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.775072   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.775109   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.262338   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:36.267165   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:36.267207   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.762879   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.074225   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:37.074263   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:37.262637   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.267514   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 21:59:37.275551   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 21:59:37.275579   65699 api_server.go:131] duration metric: took 5.513383348s to wait for apiserver health ...
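	(Between 21:59:31 and 21:59:37 the restart loop above walks through the usual apiserver startup sequence: connection refused while the static pod comes up, 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while post-start hooks are still failing, then 200. A rough Go sketch of such a polling loop, not the api_server.go code itself; the URL is taken from the log, the timeout and interval are assumptions.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it answers 200, treating 403
// (anonymous user before RBAC bootstrap) and 500 (post-start hooks still
// failing) as "not ready yet", much like the retry loop in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Unauthenticated probe against the apiserver's self-signed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.84:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}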
	I0318 21:59:37.275590   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:37.275598   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:37.496330   65699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:37.641915   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:37.659277   65699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:37.684019   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:38.075296   65699 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:38.075333   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:38.075353   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:38.075367   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:38.075375   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:38.075388   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:38.075407   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:38.075418   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:38.075429   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:38.075440   65699 system_pods.go:74] duration metric: took 391.399859ms to wait for pod list to return data ...
	I0318 21:59:38.075452   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:38.252627   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:38.252659   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:38.252670   65699 node_conditions.go:105] duration metric: took 177.209294ms to run NodePressure ...
	I0318 21:59:38.252692   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.662257   65699 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670807   65699 kubeadm.go:733] kubelet initialised
	I0318 21:59:38.670836   65699 kubeadm.go:734] duration metric: took 8.550399ms waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670846   65699 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:38.680740   65699 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.689134   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689157   65699 pod_ready.go:81] duration metric: took 8.393104ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.689169   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689178   65699 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.693796   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693815   65699 pod_ready.go:81] duration metric: took 4.628403ms for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.693824   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693829   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.701225   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701245   65699 pod_ready.go:81] duration metric: took 7.410052ms for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.701254   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701262   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.707848   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707871   65699 pod_ready.go:81] duration metric: took 6.598987ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.707882   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707889   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.066641   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066668   65699 pod_ready.go:81] duration metric: took 358.769058ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.066679   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066687   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.466406   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466440   65699 pod_ready.go:81] duration metric: took 399.746217ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.466449   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466455   65699 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.866206   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866232   65699 pod_ready.go:81] duration metric: took 399.76891ms for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.866240   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866247   65699 pod_ready.go:38] duration metric: took 1.195391629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:39.866263   65699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 21:59:39.879772   65699 ops.go:34] apiserver oom_adj: -16
	I0318 21:59:39.879796   65699 kubeadm.go:591] duration metric: took 11.146603139s to restartPrimaryControlPlane
	I0318 21:59:39.879807   65699 kubeadm.go:393] duration metric: took 11.21059758s to StartCluster
	I0318 21:59:39.879825   65699 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.879915   65699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:59:39.881739   65699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.881970   65699 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:59:39.883934   65699 out.go:177] * Verifying Kubernetes components...
	I0318 21:59:39.882064   65699 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 21:59:39.882254   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:39.885913   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:39.885924   65699 addons.go:69] Setting metrics-server=true in profile "no-preload-963041"
	I0318 21:59:39.885932   65699 addons.go:69] Setting default-storageclass=true in profile "no-preload-963041"
	I0318 21:59:39.885950   65699 addons.go:234] Setting addon metrics-server=true in "no-preload-963041"
	W0318 21:59:39.885958   65699 addons.go:243] addon metrics-server should already be in state true
	I0318 21:59:39.885966   65699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963041"
	I0318 21:59:39.885918   65699 addons.go:69] Setting storage-provisioner=true in profile "no-preload-963041"
	I0318 21:59:39.885985   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886000   65699 addons.go:234] Setting addon storage-provisioner=true in "no-preload-963041"
	W0318 21:59:39.886052   65699 addons.go:243] addon storage-provisioner should already be in state true
	I0318 21:59:39.886075   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886384   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886403   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886437   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886392   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886448   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886438   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.902103   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0318 21:59:39.902574   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.903192   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.903211   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.903568   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.904113   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.904142   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.908122   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0318 21:59:39.908269   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0318 21:59:39.908566   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.908639   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.909237   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.909251   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.909662   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.909834   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.913534   65699 addons.go:234] Setting addon default-storageclass=true in "no-preload-963041"
	W0318 21:59:39.913558   65699 addons.go:243] addon default-storageclass should already be in state true
	I0318 21:59:39.913586   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.913959   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.913992   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.921260   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.921284   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.921661   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.922725   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.922778   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.925575   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0318 21:59:39.926170   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.926799   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.926819   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.933014   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0318 21:59:39.933066   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.934464   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.934527   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.935441   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.935456   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.936236   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.936821   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.936870   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.936983   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.938986   65699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:39.940103   65699 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:39.940115   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 21:59:39.940128   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.942712   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943138   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.943168   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943415   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.943574   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.943690   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.943828   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.944813   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0318 21:59:39.961605   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.962117   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.962140   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.962564   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.962745   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.964606   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.970697   65699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 21:59:35.034845   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:35.535418   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.534613   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.034944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.535119   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.035549   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.534668   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.034813   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.534586   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.222479   65170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.055805805s)
	I0318 21:59:36.222507   65170 crio.go:469] duration metric: took 3.055923767s to extract the tarball
	I0318 21:59:36.222515   65170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:59:36.265990   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:36.314679   65170 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:59:36.314704   65170 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:59:36.314714   65170 kubeadm.go:928] updating node { 192.168.50.150 8444 v1.28.4 crio true true} ...
	I0318 21:59:36.314828   65170 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-660775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:36.314900   65170 ssh_runner.go:195] Run: crio config
	I0318 21:59:36.375889   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:36.375908   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:36.375916   65170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:36.375935   65170 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.150 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-660775 NodeName:default-k8s-diff-port-660775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:36.376058   65170 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.150
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-660775"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:36.376117   65170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:59:36.387851   65170 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:36.387905   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:36.398095   65170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0318 21:59:36.416507   65170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:59:36.437165   65170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0318 21:59:36.458125   65170 ssh_runner.go:195] Run: grep 192.168.50.150	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:36.462688   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:36.476913   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:36.629523   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:36.648679   65170 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775 for IP: 192.168.50.150
	I0318 21:59:36.648697   65170 certs.go:194] generating shared ca certs ...
	I0318 21:59:36.648717   65170 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:36.648870   65170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:36.648942   65170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:36.648956   65170 certs.go:256] generating profile certs ...
	I0318 21:59:36.649061   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/client.key
	I0318 21:59:36.649136   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key.6eb93750
	I0318 21:59:36.649181   65170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key
	I0318 21:59:36.649342   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:36.649408   65170 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:36.649427   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:36.649465   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:36.649502   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:36.649524   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:36.649563   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:36.650116   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:36.709130   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:36.777530   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:36.822349   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:36.861155   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 21:59:36.899264   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:59:36.930697   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:36.960715   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:36.992062   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:37.020001   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:37.051443   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:37.080115   65170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:37.102221   65170 ssh_runner.go:195] Run: openssl version
	I0318 21:59:37.111020   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:37.127447   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132675   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132730   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.139092   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:37.151349   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:37.166470   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172601   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172656   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.179404   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:37.192628   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:37.206758   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211839   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211882   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.218285   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:59:37.230291   65170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:37.235312   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:37.242399   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:37.249658   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:37.256458   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:37.263110   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:37.270329   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
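
The preceding runs show the two openssl patterns minikube uses on the guest: linking each CA into /etc/ssl/certs/<hash>.0 and checking that every control-plane certificate is still valid for at least 24h ("-checkend 86400"). A minimal local sketch of both patterns, assuming the commands run directly rather than through the ssh_runner, and using an example path from the log:

// hashcerts.go - illustrative sketch, not minikube's implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash runs `openssl x509 -hash -noout -in cert` and symlinks the
// certificate to <sslDir>/<hash>.0 so OpenSSL-based clients can find it.
func linkByHash(cert, sslDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", cert, err)
	}
	link := filepath.Join(sslDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace an existing link
	return os.Symlink(cert, link)
}

// validFor24h mirrors `openssl x509 -noout -checkend 86400 -in cert`:
// exit status 0 means the cert will not expire within the next 86400 seconds.
func validFor24h(cert string) bool {
	return exec.Command("openssl", "x509", "-noout", "-checkend", "86400", "-in", cert).Run() == nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	if err := linkByHash(cert, "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("valid for 24h:", validFor24h(cert))
}
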
	I0318 21:59:37.277040   65170 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:37.277140   65170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:37.277176   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.320525   65170 cri.go:89] found id: ""
	I0318 21:59:37.320595   65170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:37.332584   65170 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:37.332602   65170 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:37.332608   65170 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:37.332678   65170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:37.348017   65170 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:37.349557   65170 kubeconfig.go:125] found "default-k8s-diff-port-660775" server: "https://192.168.50.150:8444"
	I0318 21:59:37.352826   65170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:37.367223   65170 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.150
	I0318 21:59:37.367256   65170 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:37.367267   65170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:37.367315   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.411319   65170 cri.go:89] found id: ""
	I0318 21:59:37.411401   65170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:37.431545   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:37.442587   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:37.442610   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:37.442661   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 21:59:37.452384   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:37.452439   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:37.462519   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 21:59:37.472669   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:37.472728   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:37.483107   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.493177   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:37.493224   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.503546   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 21:59:37.513471   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:37.513512   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:37.524147   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:37.534940   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:37.665308   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.882330   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216992532s)
	I0318 21:59:38.882356   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.110948   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.217267   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.332300   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:39.332389   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.833190   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
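
The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above (both the 65170 and the interleaved 65622 runs) are a ~500ms polling loop waiting for the apiserver process to appear. A minimal sketch of that wait loop, assuming pgrep is run locally instead of through minikube's ssh_runner:

// waitapiserver.go - illustrative sketch of the polling loop, not minikube's code.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process exists
// or the context expires.
func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// -x: match the whole command line, -n: newest match, -f: full cmdline.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServerProcess(ctx))
}
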
	I0318 21:59:39.972027   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 21:59:39.972078   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 21:59:39.972109   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.975122   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975608   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.975627   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975994   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.976196   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.976371   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.976663   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.982859   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I0318 21:59:39.983263   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.983860   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.983904   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.984308   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.984558   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.986338   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.986645   65699 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:39.986690   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 21:59:39.986718   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.989398   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989741   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.989999   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989951   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.990229   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.990392   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.990517   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:40.115233   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:40.136271   65699 node_ready.go:35] waiting up to 6m0s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:40.232668   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:40.234394   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 21:59:40.234417   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 21:59:40.256237   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:40.301845   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 21:59:40.301873   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 21:59:40.354405   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:40.354435   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 21:59:40.377996   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:41.389416   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.156705132s)
	I0318 21:59:41.389429   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.133120616s)
	I0318 21:59:41.389470   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389475   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389482   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389486   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389763   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389783   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389792   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389799   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389828   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.389874   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389890   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389899   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389938   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.390199   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390398   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.390339   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390375   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390451   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390470   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.397714   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.397736   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.397951   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.397999   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.398017   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.415620   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037584799s)
	I0318 21:59:41.415673   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.415684   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.415964   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.415992   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.416007   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416016   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.416027   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.416207   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.416220   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416229   65699 addons.go:470] Verifying addon metrics-server=true in "no-preload-963041"
	I0318 21:59:41.418761   65699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 21:59:38.798943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:40.800913   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:41.420038   65699 addons.go:505] duration metric: took 1.537986468s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
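
The addon-enable steps above follow one pattern: copy a manifest into /etc/kubernetes/addons/ ("scp memory") and apply it with the pinned kubectl binary, passing KUBECONFIG through the environment. A minimal sketch of that pattern; the embedded StorageClass manifest and paths are illustrative, not minikube's real addon assets:

// addons.go - illustrative sketch of the apply step, not minikube's addon code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const storageClass = `apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: k8s.io/minikube-hostpath
`

func main() {
	manifest := "/etc/kubernetes/addons/storageclass.yaml"
	// Write the manifest where the log's "scp memory" step would place it.
	if err := os.WriteFile(manifest, []byte(storageClass), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Apply with the versioned kubectl, as in the log lines above.
	cmd := exec.Command("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", "apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
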
	I0318 21:59:40.332810   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.411342   65170 api_server.go:72] duration metric: took 1.079036948s to wait for apiserver process to appear ...
	I0318 21:59:40.411371   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:40.411394   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:40.411932   65170 api_server.go:269] stopped: https://192.168.50.150:8444/healthz: Get "https://192.168.50.150:8444/healthz": dial tcp 192.168.50.150:8444: connect: connection refused
	I0318 21:59:40.911545   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.377410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.377443   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.377471   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.426410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.426468   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.426485   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.448464   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.448523   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:43.912498   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.918271   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.918309   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.411824   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.422200   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:44.422223   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.911509   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.916884   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 21:59:44.928835   65170 api_server.go:141] control plane version: v1.28.4
	I0318 21:59:44.928862   65170 api_server.go:131] duration metric: took 4.517483413s to wait for apiserver health ...
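
The healthz exchanges above show the expected startup sequence: connection refused, then 403 for the anonymous probe, then 500 while poststart hooks finish, and finally 200 "ok". A minimal sketch of that polling loop, assuming a plain HTTPS client that skips verification of the cluster-CA-signed serving cert; minikube's api_server.go additionally classifies the 403/500 bodies rather than treating them as fatal:

// healthz.go - illustrative sketch of the /healthz wait, not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cert signed by the cluster CA we do not trust here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // matches the 200 "ok" line in the log
			}
			fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.150:8444/healthz", 2*time.Minute))
}
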
	I0318 21:59:44.928872   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:44.928881   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:44.930794   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:40.035532   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.535482   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.035196   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.534632   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.035183   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.535562   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.034598   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.534971   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.535025   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.932164   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:44.959217   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:45.002449   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:45.017348   65170 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:45.017394   65170 system_pods.go:61] "coredns-5dd5756b68-cjq2v" [9ae899ef-63e4-407d-9013-71552ec87614] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:45.017407   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [286b98ba-bc9e-4e2f-984c-d7b2447aef15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:45.017417   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [7a0db461-f8d5-4331-993e-d7b9345159e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:45.017428   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [e4f5859a-dfcc-41d8-9a17-acb601449821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:45.017443   65170 system_pods.go:61] "kube-proxy-qt2m6" [c3c7c6db-4935-4079-b0e7-60ba2cd886b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:45.017450   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [7115eef0-5ff4-4dfe-9135-88ad8f698e43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:45.017461   65170 system_pods.go:61] "metrics-server-57f55c9bc5-5dtf5" [b19191ee-e2db-4392-82e2-1a95fae76101] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:45.017489   65170 system_pods.go:61] "storage-provisioner" [045d4b30-47a3-4c80-a9e8-c36ef7395e6c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:45.017498   65170 system_pods.go:74] duration metric: took 15.027239ms to wait for pod list to return data ...
	I0318 21:59:45.017511   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:45.020962   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:45.020982   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:45.020991   65170 node_conditions.go:105] duration metric: took 3.47292ms to run NodePressure ...
	I0318 21:59:45.021007   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:45.277662   65170 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282939   65170 kubeadm.go:733] kubelet initialised
	I0318 21:59:45.282958   65170 kubeadm.go:734] duration metric: took 5.277143ms waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282965   65170 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.289546   65170 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:43.299509   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:45.300875   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:42.142145   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:44.641863   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:45.640660   65699 node_ready.go:49] node "no-preload-963041" has status "Ready":"True"
	I0318 21:59:45.640686   65699 node_ready.go:38] duration metric: took 5.50437071s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:45.640697   65699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.647087   65699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652062   65699 pod_ready.go:92] pod "coredns-76f75df574-6mtzp" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.652081   65699 pod_ready.go:81] duration metric: took 4.969873ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652091   65699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
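
The pod_ready.go lines above and below poll each system-critical pod until its Ready condition reports "True" (or note why it is being skipped when the node itself is not Ready). A minimal sketch of that check, shelling out to kubectl with a jsonpath query; the namespace and pod name are examples from the log, and the real code uses the Kubernetes client libraries rather than kubectl:

// podready.go - illustrative sketch of a Ready-condition poll, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition is "True".
func podReady(kubeconfig, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podReady("/var/lib/minikube/kubeconfig", "kube-system", "etcd-no-preload-963041")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
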
	I0318 21:59:45.035239   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.535303   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.034742   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.534584   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.034935   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.534952   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.534497   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.035380   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.535498   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.296790   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298834   65170 pod_ready.go:81] duration metric: took 9.259848ms for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.298849   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298868   65170 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.307325   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307367   65170 pod_ready.go:81] duration metric: took 8.486967ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.307380   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307389   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.319473   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319498   65170 pod_ready.go:81] duration metric: took 12.100242ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.319514   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319522   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.407356   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407379   65170 pod_ready.go:81] duration metric: took 87.846686ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.407390   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407395   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806835   65170 pod_ready.go:92] pod "kube-proxy-qt2m6" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.806866   65170 pod_ready.go:81] duration metric: took 399.462221ms for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806878   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:47.814286   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:47.799616   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:50.300118   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:46.659819   65699 pod_ready.go:92] pod "etcd-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:46.659855   65699 pod_ready.go:81] duration metric: took 1.007755238s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:46.659868   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:48.669033   65699 pod_ready.go:102] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:51.168202   65699 pod_ready.go:92] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.168229   65699 pod_ready.go:81] duration metric: took 4.508354098s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.168240   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174243   65699 pod_ready.go:92] pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.174268   65699 pod_ready.go:81] duration metric: took 6.018685ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174280   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179279   65699 pod_ready.go:92] pod "kube-proxy-kkrzx" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.179300   65699 pod_ready.go:81] duration metric: took 5.012711ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179311   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185651   65699 pod_ready.go:92] pod "kube-scheduler-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.185670   65699 pod_ready.go:81] duration metric: took 6.351567ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185678   65699 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:50.034691   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.534680   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.535213   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.034594   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.535195   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.034574   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.535423   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.035369   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.534621   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.315135   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.814432   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.798645   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:54.800561   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:53.191834   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.192346   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.035308   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:55.535503   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.035231   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.534937   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.035317   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.534581   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.034565   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.534830   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.535280   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:59:59.535354   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:59:59.577600   65622 cri.go:89] found id: ""
	I0318 21:59:59.577632   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.577643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:59:59.577651   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:59:59.577710   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:59:59.614134   65622 cri.go:89] found id: ""
	I0318 21:59:59.614158   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.614166   65622 logs.go:278] No container was found matching "etcd"
	I0318 21:59:59.614171   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:59:59.614245   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:59:59.653525   65622 cri.go:89] found id: ""
	I0318 21:59:59.653559   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.653571   65622 logs.go:278] No container was found matching "coredns"
	I0318 21:59:59.653578   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:59:59.653633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:59:59.699104   65622 cri.go:89] found id: ""
	I0318 21:59:59.699128   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.699139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:59:59.699146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:59:59.699214   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:59:59.735750   65622 cri.go:89] found id: ""
	I0318 21:59:59.735779   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.735789   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:59:59.735796   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:59:59.735876   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:59:59.775105   65622 cri.go:89] found id: ""
	I0318 21:59:59.775134   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.775142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:59:59.775149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:59:59.775193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:59:59.814154   65622 cri.go:89] found id: ""
	I0318 21:59:59.814181   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.814190   65622 logs.go:278] No container was found matching "kindnet"
	I0318 21:59:59.814197   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 21:59:59.814254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 21:59:59.852518   65622 cri.go:89] found id: ""
	I0318 21:59:59.852545   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.852556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 21:59:59.852565   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 21:59:59.852578   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:59:59.907243   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 21:59:59.907285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:59:59.922512   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:59:59.922540   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 21:59:55.313448   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.813863   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:56.813885   65170 pod_ready.go:81] duration metric: took 11.006997984s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:56.813893   65170 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:58.820535   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.802709   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:59.299235   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:01.299761   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:57.694309   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:00.192594   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	W0318 22:00:00.059182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:00.059202   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:00.059216   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:00.125654   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:00.125686   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:02.675440   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:02.689549   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:02.689628   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:02.731742   65622 cri.go:89] found id: ""
	I0318 22:00:02.731764   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.731771   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:02.731776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:02.731823   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:02.809611   65622 cri.go:89] found id: ""
	I0318 22:00:02.809643   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.809651   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:02.809656   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:02.809699   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:02.853939   65622 cri.go:89] found id: ""
	I0318 22:00:02.853972   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.853982   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:02.853990   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:02.854050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:02.892668   65622 cri.go:89] found id: ""
	I0318 22:00:02.892699   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.892709   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:02.892715   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:02.892773   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:02.934267   65622 cri.go:89] found id: ""
	I0318 22:00:02.934296   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.934307   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:02.934313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:02.934370   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:02.972533   65622 cri.go:89] found id: ""
	I0318 22:00:02.972556   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.972564   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:02.972569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:02.972614   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:03.011102   65622 cri.go:89] found id: ""
	I0318 22:00:03.011128   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.011137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:03.011142   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:03.011188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:03.060636   65622 cri.go:89] found id: ""
	I0318 22:00:03.060664   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.060673   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:03.060696   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:03.060710   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:03.145042   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:03.145070   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:03.145087   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:03.218475   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:03.218504   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:03.262154   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:03.262185   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:03.316766   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:03.316803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:00.821070   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.821300   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:03.301922   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.799844   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.693235   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:04.693324   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.833936   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:05.850780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:05.850858   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:05.894909   65622 cri.go:89] found id: ""
	I0318 22:00:05.894931   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.894938   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:05.894944   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:05.894987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:05.935989   65622 cri.go:89] found id: ""
	I0318 22:00:05.936020   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.936028   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:05.936032   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:05.936081   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:05.976774   65622 cri.go:89] found id: ""
	I0318 22:00:05.976797   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.976805   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:05.976811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:05.976869   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:06.015350   65622 cri.go:89] found id: ""
	I0318 22:00:06.015376   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.015387   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:06.015394   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:06.015453   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:06.059389   65622 cri.go:89] found id: ""
	I0318 22:00:06.059416   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.059427   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:06.059434   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:06.059513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:06.099524   65622 cri.go:89] found id: ""
	I0318 22:00:06.099544   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.099553   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:06.099558   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:06.099601   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:06.140343   65622 cri.go:89] found id: ""
	I0318 22:00:06.140374   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.140386   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:06.140393   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:06.140448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:06.179217   65622 cri.go:89] found id: ""
	I0318 22:00:06.179247   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.179257   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:06.179268   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:06.179286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:06.231348   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:06.231379   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:06.246049   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:06.246084   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:06.326182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:06.326203   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:06.326215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:06.405862   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:06.405895   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:08.955965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:08.970007   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:08.970076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:09.008724   65622 cri.go:89] found id: ""
	I0318 22:00:09.008752   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.008764   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:09.008781   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:09.008856   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:09.050121   65622 cri.go:89] found id: ""
	I0318 22:00:09.050158   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.050165   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:09.050170   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:09.050227   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:09.090263   65622 cri.go:89] found id: ""
	I0318 22:00:09.090293   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.090304   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:09.090312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:09.090375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:09.127645   65622 cri.go:89] found id: ""
	I0318 22:00:09.127679   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.127690   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:09.127697   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:09.127755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:09.169171   65622 cri.go:89] found id: ""
	I0318 22:00:09.169199   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.169211   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:09.169218   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:09.169278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:09.209923   65622 cri.go:89] found id: ""
	I0318 22:00:09.209949   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.209956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:09.209963   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:09.210013   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:09.247990   65622 cri.go:89] found id: ""
	I0318 22:00:09.248029   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.248039   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:09.248050   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:09.248109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:09.287287   65622 cri.go:89] found id: ""
	I0318 22:00:09.287326   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.287337   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:09.287347   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:09.287369   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:09.342877   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:09.342902   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:09.359137   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:09.359159   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:09.454504   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:09.454528   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:09.454543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:09.549191   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:09.549223   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:05.322655   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.820557   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.821227   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.799881   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.802803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:06.694723   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.096415   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:12.112886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:12.112969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:12.155639   65622 cri.go:89] found id: ""
	I0318 22:00:12.155662   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.155670   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:12.155676   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:12.155729   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:12.199252   65622 cri.go:89] found id: ""
	I0318 22:00:12.199283   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.199293   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:12.199301   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:12.199385   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:12.239688   65622 cri.go:89] found id: ""
	I0318 22:00:12.239719   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.239728   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:12.239734   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:12.239788   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:12.278610   65622 cri.go:89] found id: ""
	I0318 22:00:12.278640   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.278651   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:12.278659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:12.278724   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:12.318834   65622 cri.go:89] found id: ""
	I0318 22:00:12.318864   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.318873   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:12.318881   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:12.318939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:12.358964   65622 cri.go:89] found id: ""
	I0318 22:00:12.358986   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.358994   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:12.359002   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:12.359050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:12.399041   65622 cri.go:89] found id: ""
	I0318 22:00:12.399070   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.399080   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:12.399087   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:12.399151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:12.445019   65622 cri.go:89] found id: ""
	I0318 22:00:12.445043   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.445053   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:12.445064   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:12.445079   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:12.504987   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:12.505023   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:12.521381   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:12.521408   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:12.601574   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:12.601599   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:12.601615   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:12.683772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:12.683801   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:11.821593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:13.821792   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.299680   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.300073   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:11.693179   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.194532   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:15.229005   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:15.248227   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:15.248296   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:15.307918   65622 cri.go:89] found id: ""
	I0318 22:00:15.307940   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.307947   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:15.307953   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:15.307997   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:15.367388   65622 cri.go:89] found id: ""
	I0318 22:00:15.367417   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.367436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:15.367453   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:15.367513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:15.410880   65622 cri.go:89] found id: ""
	I0318 22:00:15.410910   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.410919   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:15.410926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:15.410983   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:15.450980   65622 cri.go:89] found id: ""
	I0318 22:00:15.451004   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.451011   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:15.451018   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:15.451071   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:15.491196   65622 cri.go:89] found id: ""
	I0318 22:00:15.491222   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.491233   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:15.491239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:15.491284   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:15.537135   65622 cri.go:89] found id: ""
	I0318 22:00:15.537159   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.537166   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:15.537173   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:15.537226   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:15.580730   65622 cri.go:89] found id: ""
	I0318 22:00:15.580762   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.580772   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:15.580780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:15.580852   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:15.626221   65622 cri.go:89] found id: ""
	I0318 22:00:15.626252   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.626265   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:15.626276   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:15.626292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.670571   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:15.670600   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:15.725485   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:15.725519   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:15.742790   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:15.742820   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:15.824867   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:15.824889   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:15.824924   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.407070   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:18.421757   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:18.421824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:18.461024   65622 cri.go:89] found id: ""
	I0318 22:00:18.461044   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.461052   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:18.461058   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:18.461104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:18.499002   65622 cri.go:89] found id: ""
	I0318 22:00:18.499032   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.499040   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:18.499046   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:18.499091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:18.539207   65622 cri.go:89] found id: ""
	I0318 22:00:18.539237   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.539248   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:18.539255   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:18.539315   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:18.579691   65622 cri.go:89] found id: ""
	I0318 22:00:18.579717   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.579726   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:18.579733   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:18.579814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:18.625084   65622 cri.go:89] found id: ""
	I0318 22:00:18.625111   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.625120   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:18.625126   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:18.625178   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:18.669012   65622 cri.go:89] found id: ""
	I0318 22:00:18.669038   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.669047   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:18.669053   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:18.669101   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:18.707523   65622 cri.go:89] found id: ""
	I0318 22:00:18.707544   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.707551   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:18.707557   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:18.707611   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:18.755138   65622 cri.go:89] found id: ""
	I0318 22:00:18.755162   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.755173   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:18.755184   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:18.755199   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:18.809140   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:18.809163   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:18.827102   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:18.827125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:18.904168   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:18.904194   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:18.904209   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.982438   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:18.982471   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.822593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.321691   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.798687   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.802403   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.302525   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.692709   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.692875   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:20.693620   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.532643   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:21.547477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:21.547545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:21.585013   65622 cri.go:89] found id: ""
	I0318 22:00:21.585038   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.585049   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:21.585056   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:21.585114   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:21.628115   65622 cri.go:89] found id: ""
	I0318 22:00:21.628139   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.628147   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:21.628153   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:21.628207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:21.664896   65622 cri.go:89] found id: ""
	I0318 22:00:21.664931   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.664942   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:21.664948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:21.665010   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:21.705770   65622 cri.go:89] found id: ""
	I0318 22:00:21.705794   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.705803   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:21.705811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:21.705868   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:21.751268   65622 cri.go:89] found id: ""
	I0318 22:00:21.751296   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.751305   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:21.751313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:21.751376   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:21.798688   65622 cri.go:89] found id: ""
	I0318 22:00:21.798714   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.798724   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:21.798732   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:21.798800   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:21.839253   65622 cri.go:89] found id: ""
	I0318 22:00:21.839281   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.839290   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:21.839297   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:21.839365   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:21.884026   65622 cri.go:89] found id: ""
	I0318 22:00:21.884055   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.884068   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:21.884086   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:21.884105   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:21.940412   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:21.940446   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:21.956634   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:21.956660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:22.031458   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:22.031481   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:22.031497   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:22.115902   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:22.115932   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:24.665945   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:24.680474   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:24.680545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:24.719692   65622 cri.go:89] found id: ""
	I0318 22:00:24.719711   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.719718   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:24.719723   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:24.719768   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:24.760734   65622 cri.go:89] found id: ""
	I0318 22:00:24.760758   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.760767   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:24.760775   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:24.760830   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:24.802688   65622 cri.go:89] found id: ""
	I0318 22:00:24.802710   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.802717   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:24.802723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:24.802778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:24.842693   65622 cri.go:89] found id: ""
	I0318 22:00:24.842715   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.842723   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:24.842730   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:24.842796   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:24.887149   65622 cri.go:89] found id: ""
	I0318 22:00:24.887173   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.887185   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:24.887195   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:24.887278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:24.926465   65622 cri.go:89] found id: ""
	I0318 22:00:24.926511   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.926522   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:24.926530   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:24.926584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:24.966876   65622 cri.go:89] found id: ""
	I0318 22:00:24.966897   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.966904   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:24.966910   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:24.966957   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:20.820297   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:22.821250   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:24.825337   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.800104   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:26.299105   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.193665   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.194188   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.007251   65622 cri.go:89] found id: ""
	I0318 22:00:25.007277   65622 logs.go:276] 0 containers: []
	W0318 22:00:25.007288   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:25.007298   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:25.007311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:25.092214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:25.092235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:25.092247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:25.173041   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:25.173076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:25.221169   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:25.221194   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:25.276322   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:25.276352   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:27.792368   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:27.809294   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:27.809359   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:27.848976   65622 cri.go:89] found id: ""
	I0318 22:00:27.849005   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.849015   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:27.849023   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:27.849076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:27.890416   65622 cri.go:89] found id: ""
	I0318 22:00:27.890437   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.890445   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:27.890450   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:27.890505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:27.934782   65622 cri.go:89] found id: ""
	I0318 22:00:27.934807   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.934819   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:27.934827   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:27.934911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:27.972251   65622 cri.go:89] found id: ""
	I0318 22:00:27.972275   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.972283   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:27.972288   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:27.972366   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:28.011321   65622 cri.go:89] found id: ""
	I0318 22:00:28.011345   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.011357   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:28.011363   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:28.011421   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:28.048087   65622 cri.go:89] found id: ""
	I0318 22:00:28.048109   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.048116   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:28.048122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:28.048169   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:28.088840   65622 cri.go:89] found id: ""
	I0318 22:00:28.088868   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.088878   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:28.088886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:28.088961   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:28.128687   65622 cri.go:89] found id: ""
	I0318 22:00:28.128714   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.128723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:28.128733   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:28.128745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:28.170853   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:28.170882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:28.224825   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:28.224850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:28.239744   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:28.239773   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:28.318640   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:28.318664   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:28.318680   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:27.321417   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:29.326924   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:28.798399   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.800456   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:27.692517   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.194633   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.897430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:30.914894   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:30.914950   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:30.952709   65622 cri.go:89] found id: ""
	I0318 22:00:30.952737   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.952748   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:30.952756   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:30.952814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:30.991113   65622 cri.go:89] found id: ""
	I0318 22:00:30.991142   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.991151   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:30.991159   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:30.991218   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:31.030248   65622 cri.go:89] found id: ""
	I0318 22:00:31.030273   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.030283   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:31.030291   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:31.030356   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:31.070836   65622 cri.go:89] found id: ""
	I0318 22:00:31.070860   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.070868   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:31.070874   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:31.070941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:31.109134   65622 cri.go:89] found id: ""
	I0318 22:00:31.109154   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.109162   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:31.109167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:31.109222   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:31.149757   65622 cri.go:89] found id: ""
	I0318 22:00:31.149784   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.149794   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:31.149802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:31.149862   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:31.190355   65622 cri.go:89] found id: ""
	I0318 22:00:31.190383   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.190393   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:31.190401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:31.190462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:31.229866   65622 cri.go:89] found id: ""
	I0318 22:00:31.229892   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.229900   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:31.229909   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:31.229926   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:31.284984   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:31.285027   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:31.301026   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:31.301050   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:31.378120   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:31.378143   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:31.378158   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:31.459445   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:31.459475   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:34.003989   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:34.020959   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:34.021012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:34.060045   65622 cri.go:89] found id: ""
	I0318 22:00:34.060074   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.060086   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:34.060103   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:34.060151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:34.101259   65622 cri.go:89] found id: ""
	I0318 22:00:34.101289   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.101299   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:34.101307   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:34.101372   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:34.141056   65622 cri.go:89] found id: ""
	I0318 22:00:34.141085   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.141096   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:34.141103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:34.141166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:34.179757   65622 cri.go:89] found id: ""
	I0318 22:00:34.179786   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.179797   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:34.179805   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:34.179872   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:34.221928   65622 cri.go:89] found id: ""
	I0318 22:00:34.221956   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.221989   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:34.221998   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:34.222063   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:34.260775   65622 cri.go:89] found id: ""
	I0318 22:00:34.260796   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.260804   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:34.260809   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:34.260866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:34.300910   65622 cri.go:89] found id: ""
	I0318 22:00:34.300936   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.300944   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:34.300950   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:34.300994   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:34.343581   65622 cri.go:89] found id: ""
	I0318 22:00:34.343611   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.343619   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:34.343628   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:34.343640   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:34.399298   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:34.399330   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:34.414580   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:34.414619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:34.488013   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:34.488031   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:34.488043   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:34.580958   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:34.580994   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:31.821301   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:34.322210   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:33.299227   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.800314   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:32.693924   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.191865   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.129601   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:37.147758   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:37.147827   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:37.194763   65622 cri.go:89] found id: ""
	I0318 22:00:37.194784   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.194791   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:37.194797   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:37.194845   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:37.236298   65622 cri.go:89] found id: ""
	I0318 22:00:37.236326   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.236334   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:37.236353   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:37.236488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:37.274776   65622 cri.go:89] found id: ""
	I0318 22:00:37.274803   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.274813   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:37.274819   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:37.274883   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:37.319360   65622 cri.go:89] found id: ""
	I0318 22:00:37.319385   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.319395   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:37.319401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:37.319463   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:37.365699   65622 cri.go:89] found id: ""
	I0318 22:00:37.365726   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.365734   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:37.365740   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:37.365824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:37.404758   65622 cri.go:89] found id: ""
	I0318 22:00:37.404789   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.404799   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:37.404807   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:37.404874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:37.444567   65622 cri.go:89] found id: ""
	I0318 22:00:37.444591   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.444598   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:37.444603   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:37.444665   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:37.487729   65622 cri.go:89] found id: ""
	I0318 22:00:37.487752   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.487760   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:37.487767   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:37.487786   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:37.566214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:37.566235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:37.566258   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:37.647847   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:37.647930   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:37.693027   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:37.693057   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:37.748111   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:37.748152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:36.324995   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.820800   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.298887   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.299570   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.193636   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:39.693273   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.277510   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:40.292312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:40.292384   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:40.330335   65622 cri.go:89] found id: ""
	I0318 22:00:40.330368   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.330379   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:40.330386   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:40.330441   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:40.372534   65622 cri.go:89] found id: ""
	I0318 22:00:40.372560   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.372570   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:40.372577   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:40.372624   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:40.409430   65622 cri.go:89] found id: ""
	I0318 22:00:40.409460   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.409471   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:40.409478   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:40.409525   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:40.448350   65622 cri.go:89] found id: ""
	I0318 22:00:40.448372   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.448380   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:40.448385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:40.448431   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:40.490526   65622 cri.go:89] found id: ""
	I0318 22:00:40.490550   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.490559   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:40.490564   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:40.490613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:40.528926   65622 cri.go:89] found id: ""
	I0318 22:00:40.528953   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.528963   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:40.528971   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:40.529031   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:40.565779   65622 cri.go:89] found id: ""
	I0318 22:00:40.565808   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.565818   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:40.565826   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:40.565902   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:40.604152   65622 cri.go:89] found id: ""
	I0318 22:00:40.604181   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.604192   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:40.604201   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:40.604215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:40.689274   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:40.689310   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:40.736810   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:40.736844   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:40.796033   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:40.796061   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:40.811906   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:40.811929   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:40.889595   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:43.390663   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:43.407179   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:43.407254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:43.448653   65622 cri.go:89] found id: ""
	I0318 22:00:43.448685   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.448696   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:43.448704   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:43.448772   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:43.489437   65622 cri.go:89] found id: ""
	I0318 22:00:43.489464   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.489472   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:43.489478   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:43.489533   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:43.564173   65622 cri.go:89] found id: ""
	I0318 22:00:43.564199   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.564209   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:43.564217   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:43.564278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:43.606221   65622 cri.go:89] found id: ""
	I0318 22:00:43.606250   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.606260   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:43.606267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:43.606333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:43.646748   65622 cri.go:89] found id: ""
	I0318 22:00:43.646782   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.646794   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:43.646802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:43.646864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:43.690465   65622 cri.go:89] found id: ""
	I0318 22:00:43.690496   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.690509   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:43.690519   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:43.690584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:43.730421   65622 cri.go:89] found id: ""
	I0318 22:00:43.730454   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.730464   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:43.730473   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:43.730538   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:43.769597   65622 cri.go:89] found id: ""
	I0318 22:00:43.769626   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.769636   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:43.769646   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:43.769660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:43.858316   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:43.858351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:43.907387   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:43.907417   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:43.963234   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:43.963271   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:43.979226   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:43.979253   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:44.065174   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:40.821224   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:43.319945   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.300484   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.300924   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.302264   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.192508   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.192743   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.566048   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:46.583140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:46.583212   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:46.624593   65622 cri.go:89] found id: ""
	I0318 22:00:46.624634   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.624643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:46.624649   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:46.624700   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:46.664828   65622 cri.go:89] found id: ""
	I0318 22:00:46.664858   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.664868   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:46.664874   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:46.664944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:46.703632   65622 cri.go:89] found id: ""
	I0318 22:00:46.703658   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.703668   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:46.703675   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:46.703736   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:46.743379   65622 cri.go:89] found id: ""
	I0318 22:00:46.743409   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.743420   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:46.743427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:46.743487   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:46.784145   65622 cri.go:89] found id: ""
	I0318 22:00:46.784169   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.784178   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:46.784184   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:46.784233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:46.826469   65622 cri.go:89] found id: ""
	I0318 22:00:46.826491   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.826498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:46.826504   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:46.826559   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:46.868061   65622 cri.go:89] found id: ""
	I0318 22:00:46.868089   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.868102   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:46.868110   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:46.868167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:46.910584   65622 cri.go:89] found id: ""
	I0318 22:00:46.910612   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.910622   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:46.910630   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:46.910642   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:46.954131   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:46.954157   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:47.008706   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:47.008737   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:47.024447   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:47.024474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:47.113208   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:47.113228   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:47.113242   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:49.699416   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:49.714870   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:49.714943   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:49.754386   65622 cri.go:89] found id: ""
	I0318 22:00:49.754415   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.754424   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:49.754430   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:49.754485   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:49.800223   65622 cri.go:89] found id: ""
	I0318 22:00:49.800248   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.800258   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:49.800268   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:49.800331   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:49.846747   65622 cri.go:89] found id: ""
	I0318 22:00:49.846775   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.846785   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:49.846793   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:49.846842   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:49.885554   65622 cri.go:89] found id: ""
	I0318 22:00:49.885581   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.885592   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:49.885600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:49.885652   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:49.925116   65622 cri.go:89] found id: ""
	I0318 22:00:49.925136   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.925144   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:49.925149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:49.925193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:49.968467   65622 cri.go:89] found id: ""
	I0318 22:00:49.968491   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.968498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:49.968503   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:49.968575   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:45.321277   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:47.821205   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.822803   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:48.799135   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.801798   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.692554   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.193102   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:51.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.016222   65622 cri.go:89] found id: ""
	I0318 22:00:50.016253   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.016261   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:50.016267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:50.016320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:50.057053   65622 cri.go:89] found id: ""
	I0318 22:00:50.057074   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.057082   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:50.057090   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:50.057101   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:50.137602   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:50.137631   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:50.213200   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:50.213227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:50.293533   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:50.293568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:50.312993   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:50.313019   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:50.399235   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:52.900027   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:52.914846   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:52.914918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:52.951864   65622 cri.go:89] found id: ""
	I0318 22:00:52.951887   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.951895   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:52.951900   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:52.951959   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:52.992339   65622 cri.go:89] found id: ""
	I0318 22:00:52.992374   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.992386   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:52.992393   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:52.992448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:53.030499   65622 cri.go:89] found id: ""
	I0318 22:00:53.030527   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.030536   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:53.030543   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:53.030610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:53.069607   65622 cri.go:89] found id: ""
	I0318 22:00:53.069635   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.069645   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:53.069652   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:53.069706   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:53.110235   65622 cri.go:89] found id: ""
	I0318 22:00:53.110256   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.110263   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:53.110269   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:53.110320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:53.152066   65622 cri.go:89] found id: ""
	I0318 22:00:53.152092   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.152100   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:53.152106   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:53.152166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:53.195360   65622 cri.go:89] found id: ""
	I0318 22:00:53.195386   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.195395   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:53.195402   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:53.195448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:53.235134   65622 cri.go:89] found id: ""
	I0318 22:00:53.235159   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.235166   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:53.235174   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:53.235186   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:53.286442   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:53.286473   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:53.342152   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:53.342183   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:53.358414   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:53.358438   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:53.430515   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:53.430534   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:53.430545   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:52.320478   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:54.321815   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.301031   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:55.799954   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.693639   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.193657   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.016088   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:56.034274   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:56.034350   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:56.095539   65622 cri.go:89] found id: ""
	I0318 22:00:56.095565   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.095581   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:56.095588   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:56.095645   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:56.149796   65622 cri.go:89] found id: ""
	I0318 22:00:56.149824   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.149834   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:56.149845   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:56.149907   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:56.205720   65622 cri.go:89] found id: ""
	I0318 22:00:56.205745   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.205760   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:56.205768   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:56.205828   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:56.250790   65622 cri.go:89] found id: ""
	I0318 22:00:56.250834   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.250862   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:56.250876   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:56.250944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:56.290516   65622 cri.go:89] found id: ""
	I0318 22:00:56.290538   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.290545   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:56.290552   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:56.290609   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:56.335528   65622 cri.go:89] found id: ""
	I0318 22:00:56.335557   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.335570   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:56.335577   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:56.335638   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:56.380336   65622 cri.go:89] found id: ""
	I0318 22:00:56.380365   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.380376   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:56.380383   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:56.380448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:56.426326   65622 cri.go:89] found id: ""
	I0318 22:00:56.426351   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.426359   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:56.426368   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:56.426385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:56.479966   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:56.480002   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.495557   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:56.495588   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:56.573474   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:56.573495   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:56.573506   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:56.657795   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:56.657826   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.206212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:59.221879   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:59.221936   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:59.265944   65622 cri.go:89] found id: ""
	I0318 22:00:59.265976   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.265986   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:59.265994   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:59.266052   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:59.305105   65622 cri.go:89] found id: ""
	I0318 22:00:59.305125   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.305132   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:59.305137   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:59.305182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:59.343573   65622 cri.go:89] found id: ""
	I0318 22:00:59.343600   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.343610   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:59.343618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:59.343674   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:59.385560   65622 cri.go:89] found id: ""
	I0318 22:00:59.385580   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.385587   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:59.385592   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:59.385639   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:59.422955   65622 cri.go:89] found id: ""
	I0318 22:00:59.422983   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.422994   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:59.423001   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:59.423062   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:59.460526   65622 cri.go:89] found id: ""
	I0318 22:00:59.460550   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.460561   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:59.460569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:59.460627   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:59.502703   65622 cri.go:89] found id: ""
	I0318 22:00:59.502732   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.502739   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:59.502753   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:59.502803   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:59.539097   65622 cri.go:89] found id: ""
	I0318 22:00:59.539120   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.539128   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:59.539136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:59.539147   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:59.613607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:59.613628   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:59.613643   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:59.697432   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:59.697460   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.744643   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:59.744671   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:59.800670   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:59.800704   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.820977   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.822348   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:57.804405   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.299016   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.692166   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.692526   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.318430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:02.334082   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:02.334158   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:02.383122   65622 cri.go:89] found id: ""
	I0318 22:01:02.383151   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.383161   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:02.383169   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:02.383229   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:02.426847   65622 cri.go:89] found id: ""
	I0318 22:01:02.426874   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.426884   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:02.426891   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:02.426955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:02.466377   65622 cri.go:89] found id: ""
	I0318 22:01:02.466403   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.466429   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:02.466437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:02.466501   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:02.506916   65622 cri.go:89] found id: ""
	I0318 22:01:02.506943   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.506953   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:02.506961   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:02.507021   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:02.549401   65622 cri.go:89] found id: ""
	I0318 22:01:02.549431   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.549439   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:02.549445   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:02.549494   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:02.589498   65622 cri.go:89] found id: ""
	I0318 22:01:02.589524   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.589535   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:02.589542   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:02.589603   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:02.626325   65622 cri.go:89] found id: ""
	I0318 22:01:02.626358   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.626369   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:02.626376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:02.626440   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:02.664922   65622 cri.go:89] found id: ""
	I0318 22:01:02.664949   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.664958   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:02.664969   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:02.664986   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:02.722853   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:02.722883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:02.740280   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:02.740305   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:02.819215   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:02.819232   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:02.819244   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:02.902355   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:02.902395   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:01.319955   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:03.324127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.299297   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:04.299721   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.694116   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.193971   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.452180   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:05.465921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:05.465981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:05.507224   65622 cri.go:89] found id: ""
	I0318 22:01:05.507245   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.507255   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:05.507262   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:05.507329   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:05.544705   65622 cri.go:89] found id: ""
	I0318 22:01:05.544737   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.544748   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:05.544754   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:05.544814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:05.583552   65622 cri.go:89] found id: ""
	I0318 22:01:05.583580   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.583592   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:05.583600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:05.583668   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:05.620969   65622 cri.go:89] found id: ""
	I0318 22:01:05.620995   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.621002   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:05.621009   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:05.621054   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:05.662789   65622 cri.go:89] found id: ""
	I0318 22:01:05.662816   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.662827   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:05.662835   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:05.662900   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:05.701457   65622 cri.go:89] found id: ""
	I0318 22:01:05.701496   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.701506   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:05.701513   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:05.701566   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:05.742050   65622 cri.go:89] found id: ""
	I0318 22:01:05.742078   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.742088   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:05.742095   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:05.742162   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:05.782620   65622 cri.go:89] found id: ""
	I0318 22:01:05.782645   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.782653   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:05.782661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:05.782672   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:05.875779   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:05.875815   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.927687   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:05.927711   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:05.979235   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:05.979264   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:05.997508   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:05.997536   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:06.073619   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:08.574277   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:08.588248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:08.588312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:08.626950   65622 cri.go:89] found id: ""
	I0318 22:01:08.626976   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.626987   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:08.626993   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:08.627050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:08.670404   65622 cri.go:89] found id: ""
	I0318 22:01:08.670429   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.670436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:08.670442   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:08.670505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:08.706036   65622 cri.go:89] found id: ""
	I0318 22:01:08.706063   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.706072   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:08.706079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:08.706134   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:08.743251   65622 cri.go:89] found id: ""
	I0318 22:01:08.743279   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.743290   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:08.743298   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:08.743361   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:08.782303   65622 cri.go:89] found id: ""
	I0318 22:01:08.782329   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.782340   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:08.782347   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:08.782413   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:08.827060   65622 cri.go:89] found id: ""
	I0318 22:01:08.827086   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.827095   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:08.827104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:08.827157   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:08.867098   65622 cri.go:89] found id: ""
	I0318 22:01:08.867126   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.867137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:08.867145   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:08.867192   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:08.906283   65622 cri.go:89] found id: ""
	I0318 22:01:08.906314   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.906323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:08.906334   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:08.906349   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:08.959145   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:08.959171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:08.976307   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:08.976336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:09.049255   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:09.049285   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:09.049300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:09.139458   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:09.139493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.821257   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.320779   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:06.799599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.800534   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.301906   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:07.195710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:09.691770   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.687215   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:11.701855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:11.701926   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:11.740185   65622 cri.go:89] found id: ""
	I0318 22:01:11.740213   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.740224   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:11.740231   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:11.740293   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:11.782083   65622 cri.go:89] found id: ""
	I0318 22:01:11.782110   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.782119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:11.782126   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:11.782187   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:11.830887   65622 cri.go:89] found id: ""
	I0318 22:01:11.830910   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.830920   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:11.830928   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:11.830981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:11.868585   65622 cri.go:89] found id: ""
	I0318 22:01:11.868607   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.868613   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:11.868618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:11.868673   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:11.912298   65622 cri.go:89] found id: ""
	I0318 22:01:11.912324   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.912336   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:11.912343   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:11.912396   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:11.957511   65622 cri.go:89] found id: ""
	I0318 22:01:11.957536   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.957546   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:11.957553   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:11.957610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:11.998894   65622 cri.go:89] found id: ""
	I0318 22:01:11.998916   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.998927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:11.998934   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:11.998984   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:12.039419   65622 cri.go:89] found id: ""
	I0318 22:01:12.039446   65622 logs.go:276] 0 containers: []
	W0318 22:01:12.039458   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:12.039468   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:12.039484   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:12.094721   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:12.094750   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:12.110328   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:12.110351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:12.183351   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:12.183371   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:12.183385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:12.260772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:12.260812   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:14.806518   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:14.821701   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:14.821760   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:14.864280   65622 cri.go:89] found id: ""
	I0318 22:01:14.864307   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.864316   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:14.864322   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:14.864380   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:14.913041   65622 cri.go:89] found id: ""
	I0318 22:01:14.913071   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.913083   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:14.913091   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:14.913155   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:14.951563   65622 cri.go:89] found id: ""
	I0318 22:01:14.951586   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.951594   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:14.951600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:14.951651   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:10.321379   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:12.321708   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.324578   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:13.303344   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:15.799107   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.692795   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.192711   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:16.192974   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.993070   65622 cri.go:89] found id: ""
	I0318 22:01:14.993103   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.993114   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:14.993122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:14.993182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:15.033552   65622 cri.go:89] found id: ""
	I0318 22:01:15.033580   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.033591   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:15.033600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:15.033660   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:15.075982   65622 cri.go:89] found id: ""
	I0318 22:01:15.076009   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.076020   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:15.076031   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:15.076090   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:15.118757   65622 cri.go:89] found id: ""
	I0318 22:01:15.118784   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.118795   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:15.118801   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:15.118844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:15.160333   65622 cri.go:89] found id: ""
	I0318 22:01:15.160355   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.160366   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:15.160374   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:15.160387   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:15.239607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:15.239635   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:15.239653   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:15.324254   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:15.324285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:15.370722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:15.370754   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:15.423268   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:15.423297   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:17.940107   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:17.954692   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:17.954749   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:18.001810   65622 cri.go:89] found id: ""
	I0318 22:01:18.001831   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.001838   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:18.001844   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:18.001903   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:18.042871   65622 cri.go:89] found id: ""
	I0318 22:01:18.042897   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.042909   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:18.042916   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:18.042975   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:18.083933   65622 cri.go:89] found id: ""
	I0318 22:01:18.083956   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.083964   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:18.083969   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:18.084019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:18.125590   65622 cri.go:89] found id: ""
	I0318 22:01:18.125617   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.125628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:18.125636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:18.125697   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:18.166696   65622 cri.go:89] found id: ""
	I0318 22:01:18.166727   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.166737   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:18.166745   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:18.166806   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:18.211273   65622 cri.go:89] found id: ""
	I0318 22:01:18.211297   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.211308   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:18.211315   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:18.211382   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:18.251821   65622 cri.go:89] found id: ""
	I0318 22:01:18.251844   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.251851   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:18.251860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:18.251918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:18.290507   65622 cri.go:89] found id: ""
	I0318 22:01:18.290531   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.290541   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:18.290552   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:18.290568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:18.349013   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:18.349041   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:18.366082   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:18.366113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:18.441742   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:18.441766   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:18.441780   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:18.535299   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:18.535335   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:16.820809   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.820856   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:17.800874   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.301479   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.691838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.692582   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:21.077652   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:21.092980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:21.093039   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:21.132742   65622 cri.go:89] found id: ""
	I0318 22:01:21.132762   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.132770   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:21.132776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:21.132833   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:21.170814   65622 cri.go:89] found id: ""
	I0318 22:01:21.170836   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.170844   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:21.170849   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:21.170911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:21.212812   65622 cri.go:89] found id: ""
	I0318 22:01:21.212845   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.212853   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:21.212860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:21.212924   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:21.254010   65622 cri.go:89] found id: ""
	I0318 22:01:21.254036   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.254044   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:21.254052   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:21.254095   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:21.292032   65622 cri.go:89] found id: ""
	I0318 22:01:21.292061   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.292073   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:21.292083   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:21.292152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:21.336946   65622 cri.go:89] found id: ""
	I0318 22:01:21.336975   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.336985   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:21.336992   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:21.337043   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:21.380295   65622 cri.go:89] found id: ""
	I0318 22:01:21.380319   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.380328   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:21.380336   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:21.380399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:21.417674   65622 cri.go:89] found id: ""
	I0318 22:01:21.417701   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.417708   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:21.417717   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:21.417728   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:21.470782   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:21.470808   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:21.486015   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:21.486036   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:21.560654   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:21.560682   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:21.560699   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:21.644108   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:21.644146   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:24.190787   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:24.205695   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:24.205761   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:24.262577   65622 cri.go:89] found id: ""
	I0318 22:01:24.262602   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.262610   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:24.262615   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:24.262680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:24.304807   65622 cri.go:89] found id: ""
	I0318 22:01:24.304835   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.304845   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:24.304853   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:24.304933   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:24.345595   65622 cri.go:89] found id: ""
	I0318 22:01:24.345670   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.345688   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:24.345696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:24.345762   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:24.388471   65622 cri.go:89] found id: ""
	I0318 22:01:24.388498   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.388508   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:24.388515   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:24.388573   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:24.429610   65622 cri.go:89] found id: ""
	I0318 22:01:24.429641   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.429653   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:24.429663   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:24.429728   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:24.469661   65622 cri.go:89] found id: ""
	I0318 22:01:24.469683   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.469690   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:24.469696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:24.469740   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:24.508086   65622 cri.go:89] found id: ""
	I0318 22:01:24.508115   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.508126   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:24.508133   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:24.508195   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:24.548963   65622 cri.go:89] found id: ""
	I0318 22:01:24.548988   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.548998   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:24.549009   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:24.549028   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:24.603983   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:24.604012   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:24.620185   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:24.620207   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:24.699677   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:24.699699   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:24.699713   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:24.778830   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:24.778884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:20.821237   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.320180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:22.302559   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:24.800442   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.193491   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:25.692671   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.334749   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:27.349132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:27.349188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:27.394163   65622 cri.go:89] found id: ""
	I0318 22:01:27.394190   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.394197   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:27.394203   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:27.394259   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:27.435176   65622 cri.go:89] found id: ""
	I0318 22:01:27.435198   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.435207   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:27.435215   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:27.435273   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:27.475388   65622 cri.go:89] found id: ""
	I0318 22:01:27.475414   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.475422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:27.475427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:27.475474   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:27.516225   65622 cri.go:89] found id: ""
	I0318 22:01:27.516247   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.516255   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:27.516265   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:27.516321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:27.554423   65622 cri.go:89] found id: ""
	I0318 22:01:27.554451   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.554459   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:27.554465   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:27.554518   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:27.592315   65622 cri.go:89] found id: ""
	I0318 22:01:27.592342   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.592352   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:27.592360   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:27.592418   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:27.634820   65622 cri.go:89] found id: ""
	I0318 22:01:27.634842   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.634849   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:27.634855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:27.634912   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:27.673677   65622 cri.go:89] found id: ""
	I0318 22:01:27.673703   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.673713   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:27.673724   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:27.673738   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:27.728342   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:27.728370   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:27.745465   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:27.745493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:27.817800   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:27.817822   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:27.817836   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:27.905115   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:27.905152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:25.322575   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.323097   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.821127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.302001   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.799369   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.693253   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.192347   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.450454   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:30.464916   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:30.464969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:30.504399   65622 cri.go:89] found id: ""
	I0318 22:01:30.504432   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.504443   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:30.504452   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:30.504505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:30.543216   65622 cri.go:89] found id: ""
	I0318 22:01:30.543240   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.543248   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:30.543254   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:30.543310   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:30.581415   65622 cri.go:89] found id: ""
	I0318 22:01:30.581440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.581451   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:30.581459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:30.581515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:30.620419   65622 cri.go:89] found id: ""
	I0318 22:01:30.620440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.620447   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:30.620453   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:30.620495   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:30.671859   65622 cri.go:89] found id: ""
	I0318 22:01:30.671886   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.671893   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:30.671899   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:30.671955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:30.732705   65622 cri.go:89] found id: ""
	I0318 22:01:30.732732   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.732742   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:30.732750   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:30.732811   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:30.793811   65622 cri.go:89] found id: ""
	I0318 22:01:30.793839   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.793850   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:30.793856   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:30.793915   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:30.851516   65622 cri.go:89] found id: ""
	I0318 22:01:30.851539   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.851546   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:30.851555   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:30.851566   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:30.907463   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:30.907496   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:30.924254   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:30.924286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:31.002155   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:31.002177   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:31.002193   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:31.085486   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:31.085515   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:33.627379   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:33.641314   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:33.641378   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:33.683093   65622 cri.go:89] found id: ""
	I0318 22:01:33.683119   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.683129   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:33.683136   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:33.683193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:33.724006   65622 cri.go:89] found id: ""
	I0318 22:01:33.724034   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.724042   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:33.724048   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:33.724091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:33.761196   65622 cri.go:89] found id: ""
	I0318 22:01:33.761224   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.761240   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:33.761248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:33.761306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:33.800636   65622 cri.go:89] found id: ""
	I0318 22:01:33.800661   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.800670   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:33.800676   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:33.800733   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:33.839423   65622 cri.go:89] found id: ""
	I0318 22:01:33.839450   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.839458   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:33.839464   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:33.839508   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:33.883076   65622 cri.go:89] found id: ""
	I0318 22:01:33.883102   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.883112   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:33.883118   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:33.883174   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:33.921886   65622 cri.go:89] found id: ""
	I0318 22:01:33.921909   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.921920   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:33.921926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:33.921981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:33.964632   65622 cri.go:89] found id: ""
	I0318 22:01:33.964659   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.964670   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:33.964680   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:33.964700   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:34.043708   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:34.043731   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:34.043743   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:34.129150   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:34.129178   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:34.176067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:34.176089   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:34.231399   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:34.231433   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:32.324221   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.821547   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.301599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.798017   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.692835   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.693519   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.747929   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:36.761803   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:36.761859   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:36.806407   65622 cri.go:89] found id: ""
	I0318 22:01:36.806434   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.806441   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:36.806447   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:36.806498   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:36.849046   65622 cri.go:89] found id: ""
	I0318 22:01:36.849073   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.849084   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:36.849092   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:36.849152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:36.889880   65622 cri.go:89] found id: ""
	I0318 22:01:36.889910   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.889922   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:36.889929   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:36.889995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:36.936012   65622 cri.go:89] found id: ""
	I0318 22:01:36.936033   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.936041   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:36.936046   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:36.936094   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:36.977538   65622 cri.go:89] found id: ""
	I0318 22:01:36.977568   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.977578   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:36.977587   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:36.977647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:37.014843   65622 cri.go:89] found id: ""
	I0318 22:01:37.014870   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.014881   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:37.014888   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:37.014956   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:37.055058   65622 cri.go:89] found id: ""
	I0318 22:01:37.055086   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.055097   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:37.055104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:37.055167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:37.100605   65622 cri.go:89] found id: ""
	I0318 22:01:37.100633   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.100642   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:37.100652   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:37.100666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:37.181840   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:37.181874   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:37.232689   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:37.232721   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:37.287264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:37.287294   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:37.305614   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:37.305638   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:37.389196   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
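(Annotation: each probe cycle in this log runs the same sequence of host commands. Reproduced below, purely for reference, is one pass of that sequence, copied verbatim from the Run lines above; only the --name component varies per crictl call, and the v1.20.0 kubectl path and the crictl fallback are exactly as logged.)

    sudo pgrep -xnf kube-apiserver.*minikube.*
    sudo crictl ps -a --quiet --name=kube-apiserver   # repeated for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Every pass finds no control-plane containers and the describe-nodes call fails with the connection-refused error shown above, which is why the identical block keeps repeating in this log.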
	I0318 22:01:39.889461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:39.904409   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:39.904472   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:39.944610   65622 cri.go:89] found id: ""
	I0318 22:01:39.944633   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.944641   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:39.944647   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:39.944701   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:37.323580   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.325038   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.798108   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:38.799072   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:40.799797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.694495   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.192489   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:41.193100   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.984337   65622 cri.go:89] found id: ""
	I0318 22:01:39.984360   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.984367   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:39.984373   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:39.984427   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:40.026238   65622 cri.go:89] found id: ""
	I0318 22:01:40.026264   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.026276   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:40.026282   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:40.026338   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:40.075591   65622 cri.go:89] found id: ""
	I0318 22:01:40.075619   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.075628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:40.075636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:40.075686   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:40.126829   65622 cri.go:89] found id: ""
	I0318 22:01:40.126859   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.126871   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:40.126880   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:40.126941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:40.167695   65622 cri.go:89] found id: ""
	I0318 22:01:40.167724   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.167735   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:40.167744   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:40.167802   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:40.205545   65622 cri.go:89] found id: ""
	I0318 22:01:40.205570   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.205582   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:40.205589   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:40.205636   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:40.245521   65622 cri.go:89] found id: ""
	I0318 22:01:40.245547   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.245556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:40.245567   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:40.245583   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:40.306315   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:40.306348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:40.324996   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:40.325021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:40.406484   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:40.406513   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:40.406526   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:40.492294   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:40.492323   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:43.034812   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:43.049661   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:43.049727   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:43.089419   65622 cri.go:89] found id: ""
	I0318 22:01:43.089444   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.089453   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:43.089461   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:43.089515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:43.130350   65622 cri.go:89] found id: ""
	I0318 22:01:43.130384   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.130394   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:43.130401   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:43.130462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:43.171480   65622 cri.go:89] found id: ""
	I0318 22:01:43.171506   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.171515   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:43.171522   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:43.171567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:43.210215   65622 cri.go:89] found id: ""
	I0318 22:01:43.210240   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.210249   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:43.210258   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:43.210312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:43.247024   65622 cri.go:89] found id: ""
	I0318 22:01:43.247049   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.247056   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:43.247063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:43.247113   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:43.283614   65622 cri.go:89] found id: ""
	I0318 22:01:43.283640   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.283651   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:43.283659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:43.283716   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:43.327442   65622 cri.go:89] found id: ""
	I0318 22:01:43.327468   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.327478   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:43.327486   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:43.327544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:43.365732   65622 cri.go:89] found id: ""
	I0318 22:01:43.365760   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.365769   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:43.365780   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:43.365793   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:43.425359   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:43.425396   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:43.442136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:43.442161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:43.519737   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:43.519762   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:43.519777   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:43.602933   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:43.602972   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:41.821043   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:44.322040   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:42.802267   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.301098   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:43.692766   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.693595   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
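(Annotation: the interleaved pod_ready lines come from the other test profiles still polling their metrics-server pods. A manual spot check equivalent to that poll, using the pod name and namespace from the lines above and leaving the kubectl context generic as an assumption, would be:)

    kubectl -n kube-system get pod metrics-server-57f55c9bc5-vt7hj \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

It prints False for as long as these lines keep reporting the pod's Ready status as "False".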
	I0318 22:01:46.146009   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:46.161266   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:46.161333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:46.203056   65622 cri.go:89] found id: ""
	I0318 22:01:46.203082   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.203094   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:46.203101   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:46.203159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:46.245954   65622 cri.go:89] found id: ""
	I0318 22:01:46.245981   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.245991   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:46.245998   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:46.246069   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:46.282395   65622 cri.go:89] found id: ""
	I0318 22:01:46.282420   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.282431   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:46.282438   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:46.282497   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:46.322036   65622 cri.go:89] found id: ""
	I0318 22:01:46.322061   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.322072   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:46.322079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:46.322136   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:46.360951   65622 cri.go:89] found id: ""
	I0318 22:01:46.360973   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.360981   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:46.360987   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:46.361049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:46.399334   65622 cri.go:89] found id: ""
	I0318 22:01:46.399364   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.399382   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:46.399391   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:46.399450   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:46.443891   65622 cri.go:89] found id: ""
	I0318 22:01:46.443922   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.443933   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:46.443940   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:46.443990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:46.483047   65622 cri.go:89] found id: ""
	I0318 22:01:46.483088   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.483099   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:46.483110   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:46.483124   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:46.542995   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:46.543026   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.559582   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:46.559605   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:46.637046   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:46.637065   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:46.637076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:46.719628   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:46.719657   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.263990   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:49.278403   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:49.278469   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:49.322980   65622 cri.go:89] found id: ""
	I0318 22:01:49.323003   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.323014   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:49.323021   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:49.323077   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:49.360100   65622 cri.go:89] found id: ""
	I0318 22:01:49.360120   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.360127   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:49.360132   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:49.360180   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:49.402044   65622 cri.go:89] found id: ""
	I0318 22:01:49.402084   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.402095   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:49.402103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:49.402164   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:49.442337   65622 cri.go:89] found id: ""
	I0318 22:01:49.442367   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.442391   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:49.442397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:49.442448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:49.479079   65622 cri.go:89] found id: ""
	I0318 22:01:49.479111   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.479124   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:49.479132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:49.479197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:49.526057   65622 cri.go:89] found id: ""
	I0318 22:01:49.526080   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.526090   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:49.526098   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:49.526159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:49.566720   65622 cri.go:89] found id: ""
	I0318 22:01:49.566747   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.566759   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:49.566767   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:49.566821   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:49.603120   65622 cri.go:89] found id: ""
	I0318 22:01:49.603142   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.603152   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:49.603163   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:49.603180   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:49.677879   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:49.677904   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:49.677921   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:49.762904   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:49.762933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.809332   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:49.809358   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:49.861568   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:49.861599   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.322167   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.322495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:47.800006   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.298196   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.193259   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.195154   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.377996   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:52.396078   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:52.396159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:52.435945   65622 cri.go:89] found id: ""
	I0318 22:01:52.435972   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.435980   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:52.435985   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:52.436034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:52.478723   65622 cri.go:89] found id: ""
	I0318 22:01:52.478754   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.478765   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:52.478772   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:52.478835   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:52.522240   65622 cri.go:89] found id: ""
	I0318 22:01:52.522267   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.522275   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:52.522281   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:52.522336   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:52.560168   65622 cri.go:89] found id: ""
	I0318 22:01:52.560195   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.560202   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:52.560208   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:52.560253   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:52.599730   65622 cri.go:89] found id: ""
	I0318 22:01:52.599752   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.599759   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:52.599765   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:52.599810   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:52.640357   65622 cri.go:89] found id: ""
	I0318 22:01:52.640386   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.640400   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:52.640407   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:52.640465   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:52.680925   65622 cri.go:89] found id: ""
	I0318 22:01:52.680954   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.680966   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:52.680972   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:52.681041   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:52.719537   65622 cri.go:89] found id: ""
	I0318 22:01:52.719561   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.719570   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:52.719580   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:52.719597   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:52.773264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:52.773292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:52.788278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:52.788302   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:52.866674   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:52.866700   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:52.866714   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:52.952228   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:52.952263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:50.821598   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:53.321546   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.302659   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:54.799292   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.692794   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.192968   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.499710   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:55.514986   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:55.515049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:55.561168   65622 cri.go:89] found id: ""
	I0318 22:01:55.561191   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.561198   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:55.561204   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:55.561252   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:55.606505   65622 cri.go:89] found id: ""
	I0318 22:01:55.606534   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.606545   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:55.606552   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:55.606613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:55.648625   65622 cri.go:89] found id: ""
	I0318 22:01:55.648655   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.648665   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:55.648672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:55.648731   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:55.690878   65622 cri.go:89] found id: ""
	I0318 22:01:55.690903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.690914   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:55.690923   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:55.690987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:55.729873   65622 cri.go:89] found id: ""
	I0318 22:01:55.729903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.729914   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:55.729921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:55.729982   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:55.767926   65622 cri.go:89] found id: ""
	I0318 22:01:55.767951   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.767959   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:55.767965   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:55.768025   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:55.809907   65622 cri.go:89] found id: ""
	I0318 22:01:55.809934   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.809942   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:55.809947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:55.810009   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:55.853992   65622 cri.go:89] found id: ""
	I0318 22:01:55.854023   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.854032   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:55.854041   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:55.854060   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:55.932160   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:55.932185   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:55.932200   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:56.019976   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:56.020010   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:56.063901   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:56.063935   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:56.119282   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:56.119314   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:58.636555   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:58.651774   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:58.651851   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:58.697005   65622 cri.go:89] found id: ""
	I0318 22:01:58.697037   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.697047   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:58.697055   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:58.697128   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:58.742190   65622 cri.go:89] found id: ""
	I0318 22:01:58.742218   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.742229   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:58.742236   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:58.742297   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:58.779335   65622 cri.go:89] found id: ""
	I0318 22:01:58.779359   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.779378   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:58.779385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:58.779445   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:58.818936   65622 cri.go:89] found id: ""
	I0318 22:01:58.818964   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.818972   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:58.818980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:58.819034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:58.856473   65622 cri.go:89] found id: ""
	I0318 22:01:58.856500   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.856511   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:58.856518   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:58.856579   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:58.897381   65622 cri.go:89] found id: ""
	I0318 22:01:58.897412   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.897423   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:58.897432   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:58.897503   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:58.938179   65622 cri.go:89] found id: ""
	I0318 22:01:58.938209   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.938221   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:58.938228   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:58.938295   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:58.981021   65622 cri.go:89] found id: ""
	I0318 22:01:58.981049   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.981059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:58.981067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:58.981081   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:59.054749   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:59.054779   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:59.070160   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:59.070188   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:59.150369   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:59.150385   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:59.150398   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:59.238341   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:59.238381   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:55.821471   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.822495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.299408   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.299964   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.193704   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.194959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.790139   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:01.807948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:01.808006   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:01.855198   65622 cri.go:89] found id: ""
	I0318 22:02:01.855224   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.855231   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:01.855238   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:01.855291   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:01.895292   65622 cri.go:89] found id: ""
	I0318 22:02:01.895313   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.895321   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:01.895326   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:01.895381   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:01.934102   65622 cri.go:89] found id: ""
	I0318 22:02:01.934127   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.934139   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:01.934146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:01.934196   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:01.975676   65622 cri.go:89] found id: ""
	I0318 22:02:01.975704   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.975715   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:01.975723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:01.975789   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:02.015656   65622 cri.go:89] found id: ""
	I0318 22:02:02.015691   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.015701   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:02.015710   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:02.015771   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:02.058634   65622 cri.go:89] found id: ""
	I0318 22:02:02.058658   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.058666   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:02.058672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:02.058719   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:02.096655   65622 cri.go:89] found id: ""
	I0318 22:02:02.096681   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.096692   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:02.096700   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:02.096767   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:02.137485   65622 cri.go:89] found id: ""
	I0318 22:02:02.137510   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.137519   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:02.137527   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:02.137543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:02.221269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:02.221304   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:02.265816   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:02.265846   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:02.321554   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:02.321592   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:02.338503   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:02.338530   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:02.431779   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:04.932229   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:04.948859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:04.948931   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:00.321126   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:02.321899   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.798818   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:03.800605   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:05.801459   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.693520   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.192449   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:06.192843   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.995353   65622 cri.go:89] found id: ""
	I0318 22:02:04.995379   65622 logs.go:276] 0 containers: []
	W0318 22:02:04.995386   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:04.995392   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:04.995438   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:05.034886   65622 cri.go:89] found id: ""
	I0318 22:02:05.034911   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.034922   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:05.034929   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:05.034995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:05.076635   65622 cri.go:89] found id: ""
	I0318 22:02:05.076663   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.076673   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:05.076681   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:05.076742   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:05.119481   65622 cri.go:89] found id: ""
	I0318 22:02:05.119506   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.119514   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:05.119520   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:05.119571   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:05.162331   65622 cri.go:89] found id: ""
	I0318 22:02:05.162354   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.162369   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:05.162376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:05.162428   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:05.206038   65622 cri.go:89] found id: ""
	I0318 22:02:05.206066   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.206076   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:05.206084   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:05.206142   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:05.251273   65622 cri.go:89] found id: ""
	I0318 22:02:05.251298   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.251309   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:05.251316   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:05.251375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:05.292855   65622 cri.go:89] found id: ""
	I0318 22:02:05.292882   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.292892   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:05.292917   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:05.292933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:05.310330   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:05.310354   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:05.384915   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:05.384938   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:05.384957   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:05.472147   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:05.472182   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:05.544328   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:05.544351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:08.101241   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:08.117397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:08.117515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:08.160011   65622 cri.go:89] found id: ""
	I0318 22:02:08.160035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.160043   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:08.160048   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:08.160100   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:08.202826   65622 cri.go:89] found id: ""
	I0318 22:02:08.202849   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.202860   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:08.202867   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:08.202935   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:08.241743   65622 cri.go:89] found id: ""
	I0318 22:02:08.241780   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.241792   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:08.241800   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:08.241864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:08.280725   65622 cri.go:89] found id: ""
	I0318 22:02:08.280758   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.280769   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:08.280777   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:08.280840   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:08.324015   65622 cri.go:89] found id: ""
	I0318 22:02:08.324035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.324041   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:08.324047   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:08.324104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:08.367332   65622 cri.go:89] found id: ""
	I0318 22:02:08.367356   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.367368   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:08.367375   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:08.367433   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:08.407042   65622 cri.go:89] found id: ""
	I0318 22:02:08.407066   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.407073   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:08.407079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:08.407126   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:08.443800   65622 cri.go:89] found id: ""
	I0318 22:02:08.443820   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.443827   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:08.443836   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:08.443850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:08.459139   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:08.459172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:08.534893   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:08.534918   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:08.534934   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:08.627283   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:08.627322   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:08.672928   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:08.672967   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:06.821775   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:09.322004   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.299572   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:10.799620   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.693106   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.192341   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.230296   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:11.248814   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:11.248891   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:11.297030   65622 cri.go:89] found id: ""
	I0318 22:02:11.297056   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.297065   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:11.297072   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:11.297133   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:11.348811   65622 cri.go:89] found id: ""
	I0318 22:02:11.348837   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.348847   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:11.348854   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:11.348939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:11.412137   65622 cri.go:89] found id: ""
	I0318 22:02:11.412161   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.412168   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:11.412174   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:11.412231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:11.452098   65622 cri.go:89] found id: ""
	I0318 22:02:11.452128   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.452139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:11.452147   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:11.452207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:11.492477   65622 cri.go:89] found id: ""
	I0318 22:02:11.492509   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.492519   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:11.492527   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:11.492588   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:11.532208   65622 cri.go:89] found id: ""
	I0318 22:02:11.532234   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.532244   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:11.532252   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:11.532306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:11.570515   65622 cri.go:89] found id: ""
	I0318 22:02:11.570545   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.570556   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:11.570563   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:11.570633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:11.613031   65622 cri.go:89] found id: ""
	I0318 22:02:11.613052   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.613069   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:11.613079   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:11.613098   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:11.672019   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:11.672048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:11.687528   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:11.687550   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:11.761149   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:11.761172   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:11.761187   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.847273   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:11.847311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:14.393016   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:14.409657   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:14.409732   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:14.451669   65622 cri.go:89] found id: ""
	I0318 22:02:14.451697   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.451711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:14.451717   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:14.451763   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:14.503383   65622 cri.go:89] found id: ""
	I0318 22:02:14.503408   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.503419   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:14.503427   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:14.503491   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:14.543027   65622 cri.go:89] found id: ""
	I0318 22:02:14.543048   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.543056   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:14.543061   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:14.543104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:14.583615   65622 cri.go:89] found id: ""
	I0318 22:02:14.583639   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.583649   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:14.583656   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:14.583713   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:14.621176   65622 cri.go:89] found id: ""
	I0318 22:02:14.621206   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.621217   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:14.621225   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:14.621283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:14.659419   65622 cri.go:89] found id: ""
	I0318 22:02:14.659440   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.659448   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:14.659454   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:14.659499   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:14.699307   65622 cri.go:89] found id: ""
	I0318 22:02:14.699337   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.699347   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:14.699354   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:14.699416   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:14.737379   65622 cri.go:89] found id: ""
	I0318 22:02:14.737406   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.737414   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:14.737421   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:14.737432   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:14.793912   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:14.793939   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:14.809577   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:14.809604   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:14.898740   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:14.898767   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:14.898782   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.821139   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.821610   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.299590   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.303956   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.692089   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.693750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:14.981009   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:14.981038   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.526944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:17.543437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:17.543488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:17.585722   65622 cri.go:89] found id: ""
	I0318 22:02:17.585747   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.585757   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:17.585765   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:17.585820   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:17.623603   65622 cri.go:89] found id: ""
	I0318 22:02:17.623632   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.623642   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:17.623650   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:17.623712   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:17.666086   65622 cri.go:89] found id: ""
	I0318 22:02:17.666113   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.666122   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:17.666130   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:17.666188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:17.714403   65622 cri.go:89] found id: ""
	I0318 22:02:17.714430   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.714440   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:17.714448   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:17.714527   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:17.753174   65622 cri.go:89] found id: ""
	I0318 22:02:17.753199   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.753206   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:17.753212   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:17.753270   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:17.794962   65622 cri.go:89] found id: ""
	I0318 22:02:17.794992   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.795002   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:17.795010   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:17.795068   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:17.835446   65622 cri.go:89] found id: ""
	I0318 22:02:17.835469   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.835477   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:17.835482   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:17.835529   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:17.872243   65622 cri.go:89] found id: ""
	I0318 22:02:17.872271   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.872279   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:17.872287   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:17.872299   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.915485   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:17.915520   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:17.969133   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:17.969161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:17.984278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:17.984300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:18.055851   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:18.055871   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:18.055884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:16.320827   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:18.321654   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.800563   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.300888   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.694101   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.191376   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.646312   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:20.660153   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:20.660220   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:20.704341   65622 cri.go:89] found id: ""
	I0318 22:02:20.704365   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.704376   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:20.704388   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:20.704443   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:20.747673   65622 cri.go:89] found id: ""
	I0318 22:02:20.747694   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.747702   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:20.747708   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:20.747753   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:20.787547   65622 cri.go:89] found id: ""
	I0318 22:02:20.787574   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.787585   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:20.787593   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:20.787694   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:20.830416   65622 cri.go:89] found id: ""
	I0318 22:02:20.830450   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.830461   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:20.830469   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:20.830531   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:20.871867   65622 cri.go:89] found id: ""
	I0318 22:02:20.871899   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.871912   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:20.871919   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:20.871980   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:20.915574   65622 cri.go:89] found id: ""
	I0318 22:02:20.915602   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.915614   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:20.915622   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:20.915680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:20.956277   65622 cri.go:89] found id: ""
	I0318 22:02:20.956313   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.956322   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:20.956329   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:20.956399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:20.997686   65622 cri.go:89] found id: ""
	I0318 22:02:20.997715   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.997723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:20.997732   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:20.997745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:21.015019   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:21.015048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:21.092090   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:21.092117   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:21.092133   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:21.169118   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:21.169149   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:21.215267   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:21.215298   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:23.769587   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:23.784063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:23.784119   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:23.825704   65622 cri.go:89] found id: ""
	I0318 22:02:23.825726   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.825733   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:23.825740   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:23.825795   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:23.871536   65622 cri.go:89] found id: ""
	I0318 22:02:23.871561   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.871579   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:23.871586   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:23.871647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:23.911388   65622 cri.go:89] found id: ""
	I0318 22:02:23.911415   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.911422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:23.911428   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:23.911478   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:23.956649   65622 cri.go:89] found id: ""
	I0318 22:02:23.956671   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.956679   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:23.956687   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:23.956755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:23.999368   65622 cri.go:89] found id: ""
	I0318 22:02:23.999395   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.999405   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:23.999413   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:23.999471   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:24.039075   65622 cri.go:89] found id: ""
	I0318 22:02:24.039105   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.039118   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:24.039124   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:24.039186   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:24.079473   65622 cri.go:89] found id: ""
	I0318 22:02:24.079502   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.079513   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:24.079521   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:24.079587   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:24.118019   65622 cri.go:89] found id: ""
	I0318 22:02:24.118048   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.118059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:24.118069   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:24.118085   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:24.174530   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:24.174562   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:24.191685   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:24.191724   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:24.282133   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:24.282158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:24.282172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:24.366181   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:24.366228   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:20.322586   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.820488   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.820555   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.798797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.799501   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.192760   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.193279   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.912982   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:26.927364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:26.927425   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:26.968236   65622 cri.go:89] found id: ""
	I0318 22:02:26.968259   65622 logs.go:276] 0 containers: []
	W0318 22:02:26.968267   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:26.968272   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:26.968339   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:27.008226   65622 cri.go:89] found id: ""
	I0318 22:02:27.008251   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.008261   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:27.008267   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:27.008321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:27.047742   65622 cri.go:89] found id: ""
	I0318 22:02:27.047767   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.047777   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:27.047784   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:27.047844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:27.090692   65622 cri.go:89] found id: ""
	I0318 22:02:27.090722   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.090734   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:27.090741   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:27.090797   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:27.126596   65622 cri.go:89] found id: ""
	I0318 22:02:27.126621   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.126629   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:27.126635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:27.126684   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:27.162492   65622 cri.go:89] found id: ""
	I0318 22:02:27.162521   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.162530   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:27.162535   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:27.162583   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:27.203480   65622 cri.go:89] found id: ""
	I0318 22:02:27.203504   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.203517   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:27.203524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:27.203598   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:27.247140   65622 cri.go:89] found id: ""
	I0318 22:02:27.247162   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.247172   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:27.247182   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:27.247198   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:27.328507   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:27.328529   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:27.328543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:27.409269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:27.409303   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:27.459615   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:27.459647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:27.512980   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:27.513014   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:26.821222   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.321682   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:27.302631   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.799175   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.693239   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.192207   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.193072   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:30.030021   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:30.045235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:30.045288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:30.092857   65622 cri.go:89] found id: ""
	I0318 22:02:30.092896   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.092919   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:30.092927   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:30.092977   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:30.133145   65622 cri.go:89] found id: ""
	I0318 22:02:30.133169   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.133176   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:30.133181   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:30.133244   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:30.179214   65622 cri.go:89] found id: ""
	I0318 22:02:30.179242   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.179252   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:30.179259   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:30.179323   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:30.221500   65622 cri.go:89] found id: ""
	I0318 22:02:30.221524   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.221533   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:30.221541   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:30.221585   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:30.262483   65622 cri.go:89] found id: ""
	I0318 22:02:30.262505   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.262516   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:30.262524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:30.262584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:30.308456   65622 cri.go:89] found id: ""
	I0318 22:02:30.308482   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.308493   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:30.308500   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:30.308544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:30.346818   65622 cri.go:89] found id: ""
	I0318 22:02:30.346845   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.346853   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:30.346859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:30.346914   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:30.387265   65622 cri.go:89] found id: ""
	I0318 22:02:30.387298   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.387307   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:30.387317   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:30.387336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:30.446382   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:30.446409   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:30.462305   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:30.462329   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:30.538560   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:30.538583   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:30.538598   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:30.622537   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:30.622571   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:33.172154   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:33.186477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:33.186540   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:33.223436   65622 cri.go:89] found id: ""
	I0318 22:02:33.223464   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.223474   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:33.223481   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:33.223537   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:33.264785   65622 cri.go:89] found id: ""
	I0318 22:02:33.264810   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.264821   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:33.264829   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:33.264881   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:33.308014   65622 cri.go:89] found id: ""
	I0318 22:02:33.308035   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.308045   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:33.308055   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:33.308109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:33.348188   65622 cri.go:89] found id: ""
	I0318 22:02:33.348215   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.348224   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:33.348231   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:33.348292   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:33.387905   65622 cri.go:89] found id: ""
	I0318 22:02:33.387935   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.387946   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:33.387954   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:33.388015   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:33.430915   65622 cri.go:89] found id: ""
	I0318 22:02:33.430944   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.430956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:33.430964   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:33.431019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:33.473103   65622 cri.go:89] found id: ""
	I0318 22:02:33.473128   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.473135   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:33.473140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:33.473197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:33.512960   65622 cri.go:89] found id: ""
	I0318 22:02:33.512992   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.513003   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:33.513015   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:33.513029   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:33.569517   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:33.569554   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:33.585235   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:33.585263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:33.659494   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:33.659519   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:33.659538   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:33.749134   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:33.749181   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:31.820868   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.822075   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.802719   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:34.301730   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.692959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.194871   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.306589   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:36.321602   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:36.321654   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:36.364047   65622 cri.go:89] found id: ""
	I0318 22:02:36.364068   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.364076   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:36.364083   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:36.364139   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:36.406084   65622 cri.go:89] found id: ""
	I0318 22:02:36.406111   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.406119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:36.406125   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:36.406176   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:36.450861   65622 cri.go:89] found id: ""
	I0318 22:02:36.450887   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.450895   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:36.450900   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:36.450946   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:36.493979   65622 cri.go:89] found id: ""
	I0318 22:02:36.494006   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.494014   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:36.494020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:36.494079   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:36.539123   65622 cri.go:89] found id: ""
	I0318 22:02:36.539150   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.539160   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:36.539167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:36.539233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:36.577460   65622 cri.go:89] found id: ""
	I0318 22:02:36.577485   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.577495   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:36.577502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:36.577546   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:36.615276   65622 cri.go:89] found id: ""
	I0318 22:02:36.615300   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.615308   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:36.615313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:36.615369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:36.652756   65622 cri.go:89] found id: ""
	I0318 22:02:36.652775   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.652782   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:36.652790   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:36.652802   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:36.706253   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:36.706282   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:36.722032   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:36.722055   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:36.797758   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:36.797783   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:36.797799   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:36.875589   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:36.875622   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:39.422267   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:39.436967   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:39.437040   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:39.479916   65622 cri.go:89] found id: ""
	I0318 22:02:39.479941   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.479950   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:39.479956   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:39.480012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:39.542890   65622 cri.go:89] found id: ""
	I0318 22:02:39.542920   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.542930   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:39.542937   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:39.542990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:39.588200   65622 cri.go:89] found id: ""
	I0318 22:02:39.588225   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.588233   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:39.588239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:39.588290   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:39.629014   65622 cri.go:89] found id: ""
	I0318 22:02:39.629036   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.629043   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:39.629049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:39.629105   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:39.675522   65622 cri.go:89] found id: ""
	I0318 22:02:39.675551   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.675561   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:39.675569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:39.675629   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:39.722842   65622 cri.go:89] found id: ""
	I0318 22:02:39.722873   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.722883   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:39.722890   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:39.722951   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:39.760410   65622 cri.go:89] found id: ""
	I0318 22:02:39.760440   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.760451   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:39.760458   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:39.760519   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:39.799982   65622 cri.go:89] found id: ""
	I0318 22:02:39.800007   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.800016   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:39.800027   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:39.800045   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:39.878784   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:39.878805   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:39.878821   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:39.965987   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:39.966021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:36.320427   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.321178   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.799943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:39.300691   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.699873   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.193658   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:40.015006   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:40.015040   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:40.068619   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:40.068648   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:42.586444   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:42.603310   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:42.603394   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:42.645260   65622 cri.go:89] found id: ""
	I0318 22:02:42.645288   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.645296   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:42.645301   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:42.645360   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:42.682004   65622 cri.go:89] found id: ""
	I0318 22:02:42.682029   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.682036   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:42.682042   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:42.682086   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:42.722886   65622 cri.go:89] found id: ""
	I0318 22:02:42.722922   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.722939   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:42.722947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:42.723008   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:42.759183   65622 cri.go:89] found id: ""
	I0318 22:02:42.759208   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.759218   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:42.759224   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:42.759283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:42.799292   65622 cri.go:89] found id: ""
	I0318 22:02:42.799316   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.799325   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:42.799337   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:42.799389   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:42.838821   65622 cri.go:89] found id: ""
	I0318 22:02:42.838848   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.838856   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:42.838861   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:42.838908   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:42.877889   65622 cri.go:89] found id: ""
	I0318 22:02:42.877917   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.877927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:42.877935   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:42.877991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:42.921283   65622 cri.go:89] found id: ""
	I0318 22:02:42.921310   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.921323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:42.921334   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:42.921348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:43.000405   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:43.000444   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:43.042091   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:43.042116   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:43.094030   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:43.094059   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:43.108612   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:43.108647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:43.194388   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:40.321388   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:42.822538   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.799159   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.800027   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.299156   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.693317   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.194419   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:45.694881   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:45.709833   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:45.709897   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:45.749770   65622 cri.go:89] found id: ""
	I0318 22:02:45.749797   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.749806   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:45.749812   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:45.749866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:45.794879   65622 cri.go:89] found id: ""
	I0318 22:02:45.794909   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.794920   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:45.794928   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:45.794988   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:45.841587   65622 cri.go:89] found id: ""
	I0318 22:02:45.841608   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.841618   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:45.841625   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:45.841725   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:45.884972   65622 cri.go:89] found id: ""
	I0318 22:02:45.885004   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.885015   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:45.885023   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:45.885084   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:45.936170   65622 cri.go:89] found id: ""
	I0318 22:02:45.936204   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.936215   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:45.936223   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:45.936286   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:45.993684   65622 cri.go:89] found id: ""
	I0318 22:02:45.993708   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.993715   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:45.993720   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:45.993766   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:46.048422   65622 cri.go:89] found id: ""
	I0318 22:02:46.048445   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.048453   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:46.048459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:46.048512   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:46.087173   65622 cri.go:89] found id: ""
	I0318 22:02:46.087197   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.087206   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:46.087214   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:46.087227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:46.168633   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:46.168661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:46.168675   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:46.250797   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:46.250827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:46.302862   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:46.302883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:46.358096   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:46.358125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:48.874275   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:48.890166   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:48.890231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:48.930832   65622 cri.go:89] found id: ""
	I0318 22:02:48.930861   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.930869   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:48.930875   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:48.930919   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:48.972784   65622 cri.go:89] found id: ""
	I0318 22:02:48.972809   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.972819   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:48.972826   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:48.972884   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:49.011201   65622 cri.go:89] found id: ""
	I0318 22:02:49.011222   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.011229   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:49.011235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:49.011277   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:49.050457   65622 cri.go:89] found id: ""
	I0318 22:02:49.050480   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.050496   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:49.050502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:49.050565   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:49.087585   65622 cri.go:89] found id: ""
	I0318 22:02:49.087611   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.087621   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:49.087629   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:49.087687   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:49.126761   65622 cri.go:89] found id: ""
	I0318 22:02:49.126794   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.126805   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:49.126813   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:49.126874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:49.166045   65622 cri.go:89] found id: ""
	I0318 22:02:49.166074   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.166085   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:49.166092   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:49.166147   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:49.205624   65622 cri.go:89] found id: ""
	I0318 22:02:49.205650   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.205660   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:49.205670   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:49.205684   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:49.257864   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:49.257891   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:49.272581   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:49.272606   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:49.349960   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:49.349981   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:49.349996   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:49.438873   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:49.438916   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:45.322637   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:47.820481   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.300259   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.798429   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.693209   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.693611   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:51.984840   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:52.002378   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:52.002436   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:52.040871   65622 cri.go:89] found id: ""
	I0318 22:02:52.040890   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.040898   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:52.040917   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:52.040973   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:52.076062   65622 cri.go:89] found id: ""
	I0318 22:02:52.076083   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.076090   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:52.076096   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:52.076167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:52.119597   65622 cri.go:89] found id: ""
	I0318 22:02:52.119621   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.119629   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:52.119635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:52.119690   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:52.157892   65622 cri.go:89] found id: ""
	I0318 22:02:52.157919   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.157929   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:52.157936   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:52.157995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:52.196738   65622 cri.go:89] found id: ""
	I0318 22:02:52.196760   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.196767   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:52.196772   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:52.196836   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:52.234012   65622 cri.go:89] found id: ""
	I0318 22:02:52.234036   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.234043   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:52.234049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:52.234104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:52.273720   65622 cri.go:89] found id: ""
	I0318 22:02:52.273750   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.273761   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:52.273769   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:52.273817   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:52.317495   65622 cri.go:89] found id: ""
	I0318 22:02:52.317525   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.317535   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:52.317545   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:52.317619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:52.371640   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:52.371666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:52.387141   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:52.387165   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:52.469009   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:52.469035   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:52.469047   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:52.550848   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:52.550880   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:50.322017   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.820364   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:54.820692   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.799942   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.301665   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.694058   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.194171   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.096980   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:55.111353   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:55.111406   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:55.155832   65622 cri.go:89] found id: ""
	I0318 22:02:55.155857   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.155875   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:55.155882   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:55.155942   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:55.195477   65622 cri.go:89] found id: ""
	I0318 22:02:55.195499   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.195509   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:55.195516   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:55.195567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:55.234536   65622 cri.go:89] found id: ""
	I0318 22:02:55.234564   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.234574   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:55.234582   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:55.234640   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:55.270955   65622 cri.go:89] found id: ""
	I0318 22:02:55.270977   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.270984   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:55.270989   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:55.271033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:55.308883   65622 cri.go:89] found id: ""
	I0318 22:02:55.308919   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.308930   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:55.308937   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:55.308985   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:55.355259   65622 cri.go:89] found id: ""
	I0318 22:02:55.355284   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.355294   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:55.355301   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:55.355364   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:55.392385   65622 cri.go:89] found id: ""
	I0318 22:02:55.392409   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.392417   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:55.392423   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:55.392466   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:55.433773   65622 cri.go:89] found id: ""
	I0318 22:02:55.433794   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.433802   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:55.433810   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:55.433827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:55.518513   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:55.518536   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:55.518553   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:55.602717   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:55.602751   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:55.652409   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:55.652436   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:55.707150   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:55.707175   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.223146   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:58.240213   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:58.240288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:58.280676   65622 cri.go:89] found id: ""
	I0318 22:02:58.280702   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.280711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:58.280719   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:58.280778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:58.324490   65622 cri.go:89] found id: ""
	I0318 22:02:58.324515   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.324524   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:58.324531   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:58.324592   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:58.370256   65622 cri.go:89] found id: ""
	I0318 22:02:58.370288   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.370298   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:58.370309   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:58.370369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:58.419969   65622 cri.go:89] found id: ""
	I0318 22:02:58.420002   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.420012   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:58.420020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:58.420082   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:58.464916   65622 cri.go:89] found id: ""
	I0318 22:02:58.464942   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.464950   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:58.464956   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:58.465016   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:58.511388   65622 cri.go:89] found id: ""
	I0318 22:02:58.511415   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.511425   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:58.511433   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:58.511500   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:58.555314   65622 cri.go:89] found id: ""
	I0318 22:02:58.555344   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.555356   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:58.555364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:58.555426   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:58.595200   65622 cri.go:89] found id: ""
	I0318 22:02:58.595229   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.595239   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:58.595249   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:58.595263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:58.642037   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:58.642069   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:58.700216   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:58.700247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.715851   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:58.715882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:58.792139   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:58.792158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:58.792171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:56.821255   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:58.828524   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.303516   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.791851   65211 pod_ready.go:81] duration metric: took 4m0.000068811s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	E0318 22:02:57.791889   65211 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:02:57.791913   65211 pod_ready.go:38] duration metric: took 4m13.55705031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:02:57.791938   65211 kubeadm.go:591] duration metric: took 4m20.862001116s to restartPrimaryControlPlane
	W0318 22:02:57.792000   65211 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:02:57.792027   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:02:57.692975   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:59.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.395212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:01.411364   65622 kubeadm.go:591] duration metric: took 4m3.302597324s to restartPrimaryControlPlane
	W0318 22:03:01.411442   65622 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:01.411474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:02.800222   65622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.388721926s)
	I0318 22:03:02.800302   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:02.817517   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:02.832036   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:02.844307   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:02.844324   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:02.844381   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:02.857804   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:02.857882   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:02.871307   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:02.883191   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:02.883252   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:02.896457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.908089   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:02.908147   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.920327   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:02.932098   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:02.932158   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:02.944129   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:03.034197   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:03:03.034333   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:03.204271   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:03.204501   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:03.204645   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:03.415789   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:03.417688   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:03.417801   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:03.417902   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:03.418026   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:03.418129   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:03.418242   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:03.418324   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:03.418420   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:03.418502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:03.418614   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:03.418744   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:03.418823   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:03.418916   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:03.644844   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:03.912013   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:04.097560   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:04.222469   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:04.239066   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:04.250168   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:04.250225   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:04.399277   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:04.401154   65622 out.go:204]   - Booting up control plane ...
	I0318 22:03:04.401283   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:04.406500   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:04.407544   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:04.410177   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:04.418949   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
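For the v1.20.0 profile (pid 65622) kubeadm has now written the static pod manifests and is waiting for the kubelet to start them. If this phase stalls, as it appears to below, the manifests and the container runtime can be inspected directly on the node; a minimal sketch:

    ls /etc/kubernetes/manifests
    sudo crictl ps -a
    sudo journalctl -u kubelet -n 100 --no-pager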
	I0318 22:03:01.321045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:03.322008   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.694585   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:04.195750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:05.322087   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:07.820940   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:09.822652   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:06.693803   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:08.693856   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:10.694375   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:12.321504   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:14.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:13.192173   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:15.193816   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:16.822327   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.322059   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:17.691761   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.691867   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.322674   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.823374   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.692710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.695045   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.192838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.322370   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:28.820807   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.165008   65211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.372946393s)
	I0318 22:03:30.165087   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:30.184259   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:30.198417   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:30.210595   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:30.210624   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:30.210675   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:30.222159   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:30.222210   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:30.234099   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:30.244546   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:30.244621   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:30.255192   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.265777   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:30.265833   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.276674   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:30.286349   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:30.286402   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:30.296530   65211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:30.522414   65211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:03:28.193120   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.194300   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:31.321986   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:33.823045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:32.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:34.693824   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.294937   65211 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:03:39.295015   65211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:39.295142   65211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:39.295296   65211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:39.295451   65211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:39.295550   65211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:39.297047   65211 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:39.297135   65211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:39.297250   65211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:39.297368   65211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:39.297461   65211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:39.297557   65211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:39.297640   65211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:39.297742   65211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:39.297831   65211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:39.297939   65211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:39.298032   65211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:39.298084   65211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:39.298206   65211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:39.298301   65211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:39.298376   65211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:39.298451   65211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:39.298518   65211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:39.298612   65211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:39.298693   65211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:39.299829   65211 out.go:204]   - Booting up control plane ...
	I0318 22:03:39.299959   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:39.300052   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:39.300150   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:39.300308   65211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:39.300444   65211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:39.300496   65211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:39.300713   65211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:39.300829   65211 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003359 seconds
	I0318 22:03:39.300997   65211 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:03:39.301155   65211 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:03:39.301228   65211 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:03:39.301451   65211 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-141758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:03:39.301526   65211 kubeadm.go:309] [bootstrap-token] Using token: p114v6.erax4pf5xkn6x2it
	I0318 22:03:39.302903   65211 out.go:204]   - Configuring RBAC rules ...
	I0318 22:03:39.303025   65211 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:03:39.303133   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:03:39.303301   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:03:39.303479   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:03:39.303574   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:03:39.303651   65211 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:03:39.303810   65211 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:03:39.303886   65211 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:03:39.303960   65211 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:03:39.303972   65211 kubeadm.go:309] 
	I0318 22:03:39.304041   65211 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:03:39.304050   65211 kubeadm.go:309] 
	I0318 22:03:39.304158   65211 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:03:39.304173   65211 kubeadm.go:309] 
	I0318 22:03:39.304208   65211 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:03:39.304292   65211 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:03:39.304368   65211 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:03:39.304377   65211 kubeadm.go:309] 
	I0318 22:03:39.304456   65211 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:03:39.304465   65211 kubeadm.go:309] 
	I0318 22:03:39.304547   65211 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:03:39.304570   65211 kubeadm.go:309] 
	I0318 22:03:39.304649   65211 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:03:39.304754   65211 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:03:39.304861   65211 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:03:39.304878   65211 kubeadm.go:309] 
	I0318 22:03:39.305028   65211 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:03:39.305134   65211 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:03:39.305144   65211 kubeadm.go:309] 
	I0318 22:03:39.305248   65211 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305390   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:03:39.305422   65211 kubeadm.go:309] 	--control-plane 
	I0318 22:03:39.305430   65211 kubeadm.go:309] 
	I0318 22:03:39.305545   65211 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:03:39.305556   65211 kubeadm.go:309] 
	I0318 22:03:39.305676   65211 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305843   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:03:39.305859   65211 cni.go:84] Creating CNI manager for ""
	I0318 22:03:39.305873   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:03:39.307416   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:03:36.323956   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:38.821180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.308819   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:03:39.375416   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
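The 457-byte file copied here is minikube's bridge CNI config, chosen because the "kvm2" driver plus "crio" runtime was detected at cni.go:146 above. The exact contents are not shown in the log; an illustrative bridge conflist of roughly this shape (not the literal file minikube writes) could be created by hand like so:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF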
	I0318 22:03:39.434235   65211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:03:39.434303   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.434360   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-141758 minikube.k8s.io/updated_at=2024_03_18T22_03_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=embed-certs-141758 minikube.k8s.io/primary=true
	I0318 22:03:39.677778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.708540   65211 ops.go:34] apiserver oom_adj: -16
	I0318 22:03:40.178803   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:40.678832   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:37.193451   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.193667   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:44.419883   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:03:44.420568   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:44.420749   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
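The health probe kubeadm reports as failing here is the kubelet's own endpoint on port 10248, not the apiserver; staying connection-refused past the 40s initial timeout suggests the kubelet on the v1.20.0 node has not come up at all. The same probe, plus the service state, can be checked by hand on the node:

    curl -sS http://localhost:10248/healthz
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 200 --no-pager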
	I0318 22:03:40.821359   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.323788   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:41.678334   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.177921   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.678115   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.178034   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.678655   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.177993   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.678581   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.177929   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.678124   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:46.178423   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.693587   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.693965   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.195060   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:49.421054   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:49.421381   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:45.821472   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:47.822362   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.678288   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.178394   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.678824   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.678144   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.178090   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.678295   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.178829   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.677856   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:51.177778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.197085   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:50.693056   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:51.192418   65699 pod_ready.go:81] duration metric: took 4m0.006727095s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:51.192452   65699 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0318 22:03:51.192462   65699 pod_ready.go:38] duration metric: took 4m5.551753918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:51.192480   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:51.192514   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:51.192574   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:51.248553   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:51.248575   65699 cri.go:89] found id: ""
	I0318 22:03:51.248583   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:51.248634   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.254205   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:51.254270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:51.303508   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.303534   65699 cri.go:89] found id: ""
	I0318 22:03:51.303543   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:51.303600   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.310160   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:51.310212   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:51.357409   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:51.357429   65699 cri.go:89] found id: ""
	I0318 22:03:51.357436   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:51.357480   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.362683   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:51.362744   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:51.413520   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.413550   65699 cri.go:89] found id: ""
	I0318 22:03:51.413560   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:51.413619   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.419412   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:51.419483   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:51.468338   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:51.468365   65699 cri.go:89] found id: ""
	I0318 22:03:51.468374   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:51.468432   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.474006   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:51.474070   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:51.520166   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:51.520188   65699 cri.go:89] found id: ""
	I0318 22:03:51.520195   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:51.520246   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.526087   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:51.526148   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:51.570735   65699 cri.go:89] found id: ""
	I0318 22:03:51.570761   65699 logs.go:276] 0 containers: []
	W0318 22:03:51.570772   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:51.570779   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:51.570832   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:51.678380   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.178543   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.677807   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.814739   65211 kubeadm.go:1107] duration metric: took 13.380493852s to wait for elevateKubeSystemPrivileges
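The burst of `kubectl get sa default` calls above appears to be minikube polling until the default ServiceAccount exists, which it uses as the signal that kube-system privileges have been elevated; here it converged after roughly 13.4s. An equivalent hand-rolled wait, using the same binary and kubeconfig the log shows, might look like:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 1
    done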
	W0318 22:03:52.814773   65211 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:03:52.814782   65211 kubeadm.go:393] duration metric: took 5m15.94869953s to StartCluster
	I0318 22:03:52.814803   65211 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.814883   65211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:03:52.816928   65211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.817192   65211 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:03:52.818800   65211 out.go:177] * Verifying Kubernetes components...
	I0318 22:03:52.817486   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:03:52.817499   65211 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:03:52.820175   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:03:52.818838   65211 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-141758"
	I0318 22:03:52.820277   65211 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-141758"
	W0318 22:03:52.820288   65211 addons.go:243] addon storage-provisioner should already be in state true
	I0318 22:03:52.818844   65211 addons.go:69] Setting metrics-server=true in profile "embed-certs-141758"
	I0318 22:03:52.820369   65211 addons.go:234] Setting addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:52.818848   65211 addons.go:69] Setting default-storageclass=true in profile "embed-certs-141758"
	I0318 22:03:52.820429   65211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-141758"
	I0318 22:03:52.820317   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	W0318 22:03:52.820386   65211 addons.go:243] addon metrics-server should already be in state true
	I0318 22:03:52.820697   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.820821   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820846   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.820872   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820899   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.821079   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.821107   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.839829   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0318 22:03:52.839850   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I0318 22:03:52.839992   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840448   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840986   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841010   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841124   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841144   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841148   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841162   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841385   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841428   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841557   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841639   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.842001   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842043   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.842049   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842068   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.845295   65211 addons.go:234] Setting addon default-storageclass=true in "embed-certs-141758"
	W0318 22:03:52.845315   65211 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:03:52.845343   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.845692   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.845736   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.864111   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0318 22:03:52.864141   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0318 22:03:52.864614   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.864688   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.865181   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865199   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865318   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865334   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865556   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866107   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.866147   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.866343   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866630   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.868253   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.870076   65211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:03:52.871315   65211 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:52.871333   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:03:52.871352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.873922   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0318 22:03:52.874420   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.874924   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.874944   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.875080   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.875194   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.875254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.875346   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.875478   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.875718   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.875733   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.876060   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.876234   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.877582   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.879040   65211 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:03:50.320724   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.321791   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:54.821845   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.880124   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:03:52.880135   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:03:52.880152   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.882530   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.882957   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.882979   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.883230   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.883371   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.883507   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.883638   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.886181   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0318 22:03:52.886563   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.887043   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.887064   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.887416   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.887599   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.888998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.889490   65211 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:52.889504   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:03:52.889519   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.891985   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892380   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.892435   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.892776   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.892949   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.893066   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:53.047557   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:03:53.098470   65211 node_ready.go:35] waiting up to 6m0s for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111074   65211 node_ready.go:49] node "embed-certs-141758" has status "Ready":"True"
	I0318 22:03:53.111093   65211 node_ready.go:38] duration metric: took 12.593803ms for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111102   65211 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:53.127297   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
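After the re-init, the harness switches to per-pod readiness waits; the first is coredns-5dd5756b68-k675p, which turns Ready about two seconds later in the lines below. Outside the test code, roughly the same wait can be expressed with kubectl directly (illustrative, assuming the profile's default context name):

    kubectl --context embed-certs-141758 -n kube-system \
      wait --for=condition=Ready pod/coredns-5dd5756b68-k675p --timeout=6m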
	I0318 22:03:53.167460   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:03:53.167476   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:03:53.199789   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:53.221070   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:53.233431   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:03:53.233452   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:03:53.298339   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:53.298368   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:03:53.415046   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
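This apply installs the metrics-server APIService, Deployment, RBAC, and Service for the addon whose pods repeatedly report Ready:"False" elsewhere in this log. Whether the addon actually registered can be verified independently of the harness with something like the following (illustrative; assumes the addon keeps the upstream v1beta1.metrics.k8s.io APIService name and k8s-app=metrics-server label):

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get apiservice v1beta1.metrics.k8s.io
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get deploy,po -l k8s-app=metrics-server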
	I0318 22:03:55.057164   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.85734001s)
	I0318 22:03:55.057233   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057252   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.057590   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057601   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.057614   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057634   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057888   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057929   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.064097   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.064111   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.064376   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.064402   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.064418   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.138948   65211 pod_ready.go:92] pod "coredns-5dd5756b68-k675p" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.138968   65211 pod_ready.go:81] duration metric: took 2.011647544s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.138976   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150187   65211 pod_ready.go:92] pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.150204   65211 pod_ready.go:81] duration metric: took 11.222328ms for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150213   65211 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157054   65211 pod_ready.go:92] pod "etcd-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.157073   65211 pod_ready.go:81] duration metric: took 6.853876ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157086   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.167962   65211 pod_ready.go:92] pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.167986   65211 pod_ready.go:81] duration metric: took 10.892042ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.168000   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177187   65211 pod_ready.go:92] pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.177204   65211 pod_ready.go:81] duration metric: took 9.197593ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177213   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.515883   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.294780085s)
	I0318 22:03:55.515937   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.515948   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.515952   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.100869127s)
	I0318 22:03:55.515994   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516014   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516301   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516378   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516469   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516481   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516491   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516406   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516451   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516665   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516683   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516691   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516772   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516839   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516867   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519334   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.519340   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.519355   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519365   65211 addons.go:470] Verifying addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:55.520941   65211 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 22:03:55.522318   65211 addons.go:505] duration metric: took 2.704813533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
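
[editor's note] The addons lines above scp each manifest onto the node and then apply the whole set with the node's bundled kubectl against the local kubeconfig. Below is a minimal sketch of that apply step, not minikube's internal addons code; the applyManifests helper and the exact command shape are illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyManifests mirrors the log lines above: every addon manifest is applied
    // in one kubectl invocation using the node-local binary and kubeconfig.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
    	args := []string{"env", "KUBECONFIG=" + kubeconfig, kubectl, "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	if err := applyManifests("/var/lib/minikube/binaries/v1.28.4/kubectl",
    		"/var/lib/minikube/kubeconfig", manifests); err != nil {
    		fmt.Println(err)
    	}
    }
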
	I0318 22:03:55.545590   65211 pod_ready.go:92] pod "kube-proxy-jltc7" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.545614   65211 pod_ready.go:81] duration metric: took 368.395697ms for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.545625   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932726   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.932750   65211 pod_ready.go:81] duration metric: took 387.117475ms for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932757   65211 pod_ready.go:38] duration metric: took 2.821645915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
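
[editor's note] Each pod_ready check above polls a named kube-system pod until its Ready condition reports True, with a per-pod timeout. A minimal client-go sketch of that loop follows; the poll interval, timeout, and pod name are illustrative, not minikube's actual implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's Ready condition is True or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", ns, name)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-k675p", 6*time.Minute))
    }
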
	I0318 22:03:55.932771   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:55.932815   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.969924   65211 api_server.go:72] duration metric: took 3.152691986s to wait for apiserver process to appear ...
	I0318 22:03:55.969955   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.969977   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 22:03:55.976004   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 22:03:55.977450   65211 api_server.go:141] control plane version: v1.28.4
	I0318 22:03:55.977489   65211 api_server.go:131] duration metric: took 7.525909ms to wait for apiserver health ...
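
[editor's note] The healthz probe above is a plain HTTPS GET against /healthz that expects a 200 response with the body "ok". A minimal self-contained sketch is below; TLS verification is skipped only to keep the example short, whereas a real client would trust the cluster CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz GETs the apiserver /healthz endpoint and reports whether it
    // returned HTTP 200 with the body "ok".
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkHealthz("https://192.168.39.243:8443/healthz"))
    }
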
	I0318 22:03:55.977499   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:56.138403   65211 system_pods.go:59] 9 kube-system pods found
	I0318 22:03:56.138429   65211 system_pods.go:61] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.138434   65211 system_pods.go:61] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.138438   65211 system_pods.go:61] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.138441   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.138444   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.138448   65211 system_pods.go:61] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.138453   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.138462   65211 system_pods.go:61] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.138519   65211 system_pods.go:61] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 22:03:56.138532   65211 system_pods.go:74] duration metric: took 161.01924ms to wait for pod list to return data ...
	I0318 22:03:56.138544   65211 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:03:56.331884   65211 default_sa.go:45] found service account: "default"
	I0318 22:03:56.331926   65211 default_sa.go:55] duration metric: took 193.36174ms for default service account to be created ...
	I0318 22:03:56.331937   65211 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:03:56.536411   65211 system_pods.go:86] 9 kube-system pods found
	I0318 22:03:56.536443   65211 system_pods.go:89] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.536452   65211 system_pods.go:89] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.536459   65211 system_pods.go:89] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.536466   65211 system_pods.go:89] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.536472   65211 system_pods.go:89] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.536479   65211 system_pods.go:89] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.536486   65211 system_pods.go:89] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.536497   65211 system_pods.go:89] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.536507   65211 system_pods.go:89] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Running
	I0318 22:03:56.536518   65211 system_pods.go:126] duration metric: took 204.57366ms to wait for k8s-apps to be running ...
	I0318 22:03:56.536531   65211 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:03:56.536579   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:56.557315   65211 system_svc.go:56] duration metric: took 20.775851ms WaitForService to wait for kubelet
	I0318 22:03:56.557344   65211 kubeadm.go:576] duration metric: took 3.740121987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
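
[editor's note] The kubelet check above shells out to systemctl; with --quiet, is-active exits 0 only when the unit is active. A minimal sketch of the same probe (the helper name and the simplified command form are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletRunning returns true when `systemctl is-active --quiet kubelet`
    // exits 0, i.e. the kubelet unit is currently active.
    func kubeletRunning() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletRunning())
    }
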
	I0318 22:03:56.557375   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:03:51.614216   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:51.614235   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.614239   65699 cri.go:89] found id: ""
	I0318 22:03:51.614245   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:51.614297   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.619100   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.623808   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:51.623827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:51.780027   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:51.780067   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.842134   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:51.842167   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.889769   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:51.889797   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.942502   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:51.942543   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:52.467986   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:52.468043   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:52.518980   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:52.519023   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:52.536546   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:52.536586   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:52.591854   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:52.591894   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:52.640783   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:52.640818   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:52.687934   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:52.687967   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:52.749690   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:52.749726   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:52.807019   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:52.807064   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
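
[editor's note] Each "Gathering logs for ..." step above tails the last 400 lines of a matching CRI container (or, for kubelet and CRI-O, the journald unit). A minimal sketch of the per-container step, matching the command shape in the log; the helper name is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs returns the last n log lines of a CRI container,
    // using the same crictl invocation seen in the log lines above.
    func tailContainerLogs(containerID string, n int) (string, error) {
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo /usr/bin/crictl logs --tail %d %s", n, containerID))
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := tailContainerLogs("d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4", 400)
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Println(out)
    }
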
	I0318 22:03:55.392930   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.415406   65699 api_server.go:72] duration metric: took 4m15.533409678s to wait for apiserver process to appear ...
	I0318 22:03:55.415435   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.415472   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:55.415523   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:55.474200   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:55.474227   65699 cri.go:89] found id: ""
	I0318 22:03:55.474237   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:55.474295   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.479787   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:55.479907   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:55.532114   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:55.532136   65699 cri.go:89] found id: ""
	I0318 22:03:55.532145   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:55.532202   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.537215   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:55.537270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:55.588633   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:55.588657   65699 cri.go:89] found id: ""
	I0318 22:03:55.588666   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:55.588723   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.595711   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:55.595777   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:55.646684   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:55.646704   65699 cri.go:89] found id: ""
	I0318 22:03:55.646714   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:55.646770   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.651920   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:55.651982   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:55.694948   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:55.694975   65699 cri.go:89] found id: ""
	I0318 22:03:55.694984   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:55.695035   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.700275   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:55.700343   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:55.740536   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:55.740559   65699 cri.go:89] found id: ""
	I0318 22:03:55.740568   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:55.740618   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.745384   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:55.745446   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:55.784614   65699 cri.go:89] found id: ""
	I0318 22:03:55.784645   65699 logs.go:276] 0 containers: []
	W0318 22:03:55.784657   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:55.784664   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:55.784727   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:55.827306   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:55.827334   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:55.827341   65699 cri.go:89] found id: ""
	I0318 22:03:55.827349   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:55.827404   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.832314   65699 ssh_runner.go:195] Run: which crictl
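
[editor's note] The cri.go steps above list container IDs per component with `crictl ps -a --quiet --name=<component>`; an empty result, as for kindnet here, simply means no matching container exists. A minimal sketch of that listing (helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainersByName returns the IDs of all CRI containers, running or
    // exited, whose name matches the given component.
    func listContainersByName(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner", "kindnet"} {
    		ids, err := listContainersByName(c)
    		fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
    	}
    }
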
	I0318 22:03:55.838497   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:55.838520   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:55.857285   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:55.857319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:55.984597   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:55.984629   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:56.044283   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:56.044339   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:56.100329   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:56.100363   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:56.173231   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:56.173270   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:56.221280   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:56.221310   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:56.274110   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:56.274138   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:56.332863   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:56.332891   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:56.374289   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:56.374317   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:56.423793   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:56.423827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:56.478696   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:56.478734   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:56.518600   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:56.518627   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:56.731788   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:03:56.731810   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 22:03:56.731823   65211 node_conditions.go:105] duration metric: took 174.442649ms to run NodePressure ...
	I0318 22:03:56.731835   65211 start.go:240] waiting for startup goroutines ...
	I0318 22:03:56.731845   65211 start.go:245] waiting for cluster config update ...
	I0318 22:03:56.731857   65211 start.go:254] writing updated cluster config ...
	I0318 22:03:56.732109   65211 ssh_runner.go:195] Run: rm -f paused
	I0318 22:03:56.778660   65211 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:03:56.780431   65211 out.go:177] * Done! kubectl is now configured to use "embed-certs-141758" cluster and "default" namespace by default
	I0318 22:03:59.422001   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:59.422212   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:56.814631   65170 pod_ready.go:81] duration metric: took 4m0.000725499s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:56.814661   65170 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:03:56.814684   65170 pod_ready.go:38] duration metric: took 4m11.531709977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:56.814712   65170 kubeadm.go:591] duration metric: took 4m19.482098142s to restartPrimaryControlPlane
	W0318 22:03:56.814767   65170 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:56.814797   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:59.480665   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 22:03:59.485792   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 22:03:59.487343   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 22:03:59.487364   65699 api_server.go:131] duration metric: took 4.071921663s to wait for apiserver health ...
	I0318 22:03:59.487375   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:59.487406   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:59.487462   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:59.540845   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:59.540872   65699 cri.go:89] found id: ""
	I0318 22:03:59.540881   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:59.540958   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.547759   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:59.547824   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:59.593015   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:59.593042   65699 cri.go:89] found id: ""
	I0318 22:03:59.593051   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:59.593106   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.598169   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:59.598233   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:59.638484   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:59.638508   65699 cri.go:89] found id: ""
	I0318 22:03:59.638517   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:59.638575   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.643353   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:59.643416   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:59.687190   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:59.687208   65699 cri.go:89] found id: ""
	I0318 22:03:59.687216   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:59.687271   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.692481   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:59.692550   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:59.735798   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:59.735824   65699 cri.go:89] found id: ""
	I0318 22:03:59.735834   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:59.735893   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.742192   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:59.742263   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:59.782961   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:59.782989   65699 cri.go:89] found id: ""
	I0318 22:03:59.783000   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:59.783060   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.788247   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:59.788325   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:59.836955   65699 cri.go:89] found id: ""
	I0318 22:03:59.836983   65699 logs.go:276] 0 containers: []
	W0318 22:03:59.836992   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:59.836998   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:59.837052   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:59.879225   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:59.879250   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:59.879255   65699 cri.go:89] found id: ""
	I0318 22:03:59.879264   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:59.879323   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.884380   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.889289   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:59.889316   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:04:00.307344   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:04:00.307389   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:04:00.325472   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:04:00.325496   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:04:00.388254   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:04:00.388288   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:04:00.430203   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:04:00.430241   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:04:00.476834   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:04:00.476861   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:04:00.532672   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:04:00.532703   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:04:00.572174   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:04:00.572202   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:04:00.624250   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:04:00.624283   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:04:00.688520   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:04:00.688551   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:04:00.764279   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:04:00.764319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:04:00.903231   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:04:00.903262   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:04:00.974836   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:04:00.974869   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:04:03.547135   65699 system_pods.go:59] 8 kube-system pods found
	I0318 22:04:03.547166   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.547172   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.547180   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.547186   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.547193   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.547198   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.547208   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.547214   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.547224   65699 system_pods.go:74] duration metric: took 4.059842092s to wait for pod list to return data ...
	I0318 22:04:03.547233   65699 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:03.554656   65699 default_sa.go:45] found service account: "default"
	I0318 22:04:03.554682   65699 default_sa.go:55] duration metric: took 7.437557ms for default service account to be created ...
	I0318 22:04:03.554692   65699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:03.562342   65699 system_pods.go:86] 8 kube-system pods found
	I0318 22:04:03.562369   65699 system_pods.go:89] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.562374   65699 system_pods.go:89] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.562378   65699 system_pods.go:89] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.562383   65699 system_pods.go:89] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.562387   65699 system_pods.go:89] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.562391   65699 system_pods.go:89] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.562397   65699 system_pods.go:89] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.562402   65699 system_pods.go:89] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.562410   65699 system_pods.go:126] duration metric: took 7.712357ms to wait for k8s-apps to be running ...
	I0318 22:04:03.562424   65699 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:03.562470   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:03.579949   65699 system_svc.go:56] duration metric: took 17.517801ms WaitForService to wait for kubelet
	I0318 22:04:03.579977   65699 kubeadm.go:576] duration metric: took 4m23.697982351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:03.579993   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:03.585009   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:03.585037   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:03.585049   65699 node_conditions.go:105] duration metric: took 5.050614ms to run NodePressure ...
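
[editor's note] The NodePressure verification above reads each node's capacity (the ephemeral-storage and cpu figures printed in the log) and its pressure conditions. A minimal client-go sketch of that check follows; the output format is illustrative.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// These two capacity values correspond to the node_conditions lines above.
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
    			n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
    		for _, c := range n.Status.Conditions {
    			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
    				c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
    				fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
    			}
    		}
    	}
    }
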
	I0318 22:04:03.585063   65699 start.go:240] waiting for startup goroutines ...
	I0318 22:04:03.585075   65699 start.go:245] waiting for cluster config update ...
	I0318 22:04:03.585089   65699 start.go:254] writing updated cluster config ...
	I0318 22:04:03.585426   65699 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:03.634969   65699 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 22:04:03.637561   65699 out.go:177] * Done! kubectl is now configured to use "no-preload-963041" cluster and "default" namespace by default
	I0318 22:04:19.422826   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:19.423111   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:29.143869   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.329052492s)
	I0318 22:04:29.143935   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:29.161708   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:04:29.173738   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:04:29.185221   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:04:29.185241   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 22:04:29.185273   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 22:04:29.196326   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:04:29.196382   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:04:29.207305   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 22:04:29.217759   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:04:29.217811   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:04:29.228350   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.239148   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:04:29.239191   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.251191   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 22:04:29.262291   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:04:29.262339   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
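
[editor's note] The cleanup above greps each kubeconfig on the node for the expected control-plane endpoint and removes any file that does not reference it (or, as here after the kubeadm reset, does not exist), so the following kubeadm init can rewrite them. A minimal sketch of that check-and-remove loop; helper name is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleKubeconfigs removes each kubeconfig that does not mention the
    // expected control-plane endpoint, mirroring the grep/rm pairs above.
    func cleanStaleKubeconfigs(endpoint string, files []string) {
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent or the file is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s may not reference %s, removing\n", f, endpoint)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
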
	I0318 22:04:29.273343   65170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:04:29.332561   65170 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:04:29.333329   65170 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:04:29.496432   65170 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:04:29.496558   65170 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:04:29.496720   65170 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:04:29.728202   65170 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:04:29.730047   65170 out.go:204]   - Generating certificates and keys ...
	I0318 22:04:29.730126   65170 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:04:29.730202   65170 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:04:29.730297   65170 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:04:29.730669   65170 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:04:29.731209   65170 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:04:29.731887   65170 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:04:29.732569   65170 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:04:29.733362   65170 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:04:29.734045   65170 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:04:29.734477   65170 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:04:29.735264   65170 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:04:29.735340   65170 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:04:30.122363   65170 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:04:30.296021   65170 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:04:30.555774   65170 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:04:30.674403   65170 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:04:30.674943   65170 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:04:30.677509   65170 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:04:30.679219   65170 out.go:204]   - Booting up control plane ...
	I0318 22:04:30.679319   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:04:30.679402   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:04:30.681975   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:04:30.701015   65170 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:04:30.701902   65170 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:04:30.702104   65170 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:04:30.843019   65170 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:04:36.846312   65170 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002976 seconds
	I0318 22:04:36.846520   65170 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:04:36.870892   65170 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:04:37.410373   65170 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:04:37.410649   65170 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-660775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:04:37.935730   65170 kubeadm.go:309] [bootstrap-token] Using token: jwgiie.tp4r5ug6emevtbxj
	I0318 22:04:37.937024   65170 out.go:204]   - Configuring RBAC rules ...
	I0318 22:04:37.937156   65170 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:04:37.943204   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:04:37.951400   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:04:37.958005   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:04:37.962013   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:04:37.965783   65170 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:04:37.985150   65170 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:04:38.241561   65170 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:04:38.355495   65170 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:04:38.356452   65170 kubeadm.go:309] 
	I0318 22:04:38.356511   65170 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:04:38.356520   65170 kubeadm.go:309] 
	I0318 22:04:38.356598   65170 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:04:38.356609   65170 kubeadm.go:309] 
	I0318 22:04:38.356667   65170 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:04:38.356774   65170 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:04:38.356828   65170 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:04:38.356844   65170 kubeadm.go:309] 
	I0318 22:04:38.356898   65170 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:04:38.356916   65170 kubeadm.go:309] 
	I0318 22:04:38.356976   65170 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:04:38.356984   65170 kubeadm.go:309] 
	I0318 22:04:38.357030   65170 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:04:38.357093   65170 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:04:38.357161   65170 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:04:38.357168   65170 kubeadm.go:309] 
	I0318 22:04:38.357263   65170 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:04:38.357364   65170 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:04:38.357376   65170 kubeadm.go:309] 
	I0318 22:04:38.357495   65170 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.357657   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:04:38.357707   65170 kubeadm.go:309] 	--control-plane 
	I0318 22:04:38.357724   65170 kubeadm.go:309] 
	I0318 22:04:38.357861   65170 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:04:38.357873   65170 kubeadm.go:309] 
	I0318 22:04:38.357986   65170 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.358144   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:04:38.358726   65170 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
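
[editor's note] The join commands printed by kubeadm above pin the cluster CA with --discovery-token-ca-cert-hash, which is a SHA-256 over the CA certificate's DER-encoded Subject Public Key Info. A minimal sketch that recomputes such a hash from a CA certificate file; the path is illustrative.

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash computes the sha256:<hex> value used by kubeadm's
    // --discovery-token-ca-cert-hash flag.
    func caCertHash(caPath string) (string, error) {
    	pemBytes, err := os.ReadFile(caPath)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block in %s", caPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		return "", err
    	}
    	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
    }

    func main() {
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println(h)
    }
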
	I0318 22:04:38.358772   65170 cni.go:84] Creating CNI manager for ""
	I0318 22:04:38.358789   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:04:38.360246   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:04:38.361264   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:04:38.378420   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
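
[editor's note] The step above writes a bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist; the actual 457-byte file is not reproduced in the log. Purely as an illustration of what a bridge + portmap conflist of this kind looks like (not minikube's real file), a sketch that writes one:

    package main

    import (
    	"fmt"
    	"os"
    )

    // An illustrative bridge CNI conflist; the contents are an example only.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Writing under /etc/cni/net.d requires root on the node.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
    		fmt.Println("error:", err)
    	}
    }
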
	I0318 22:04:38.482111   65170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:04:38.482178   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:38.482194   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-660775 minikube.k8s.io/updated_at=2024_03_18T22_04_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=default-k8s-diff-port-660775 minikube.k8s.io/primary=true
	I0318 22:04:38.617420   65170 ops.go:34] apiserver oom_adj: -16
	I0318 22:04:38.828087   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.328292   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.828411   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.328829   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.828338   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.329118   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.828239   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.328296   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.828241   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.329151   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.829036   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.328224   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.828465   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.328632   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.828289   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.328321   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.828493   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.329008   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.828789   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.328727   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.829024   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.329010   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.828311   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.328474   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.445593   65170 kubeadm.go:1107] duration metric: took 11.963480655s to wait for elevateKubeSystemPrivileges
	W0318 22:04:50.445640   65170 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:04:50.445651   65170 kubeadm.go:393] duration metric: took 5m13.168616417s to StartCluster
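The repeated `kubectl get sa default` calls above are minikube polling until the default service account exists before finishing the elevateKubeSystemPrivileges step. The same wait can be reproduced by hand with a small loop (a sketch, assuming kubectl already points at the new cluster):

	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 1; done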
	I0318 22:04:50.445672   65170 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.445754   65170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:04:50.447789   65170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.448086   65170 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:04:50.449989   65170 out.go:177] * Verifying Kubernetes components...
	I0318 22:04:50.448238   65170 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:04:50.450030   65170 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450044   65170 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450068   65170 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-660775"
	I0318 22:04:50.450070   65170 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.450078   65170 addons.go:243] addon storage-provisioner should already be in state true
	W0318 22:04:50.450082   65170 addons.go:243] addon metrics-server should already be in state true
	I0318 22:04:50.450105   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450116   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450033   65170 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450181   65170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-660775"
	I0318 22:04:50.450493   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450516   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450585   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450628   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.448310   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:04:50.452465   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:04:50.466764   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0318 22:04:50.468214   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0318 22:04:50.468460   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.468676   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469019   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469038   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469182   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469195   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469254   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0318 22:04:50.469549   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.469605   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469603   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470035   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.470053   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.470320   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470350   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470381   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470385   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470395   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470535   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.473854   65170 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.473879   65170 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:04:50.473907   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.474268   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.474301   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.485707   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39175
	I0318 22:04:50.486097   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0318 22:04:50.486278   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486675   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486809   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.486818   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487074   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.487086   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487345   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487513   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487561   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.487759   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.489284   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.491084   65170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:04:50.489730   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.492156   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0318 22:04:50.492539   65170 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.492549   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:04:50.492563   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.494057   65170 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:04:50.492998   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.495232   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:04:50.495253   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:04:50.495275   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.495863   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.495887   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.495952   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.496340   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496476   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.496620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.496757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.496861   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.497350   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.498004   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.498047   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.498450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.499027   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499235   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.499406   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.499565   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.499691   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.515126   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0318 22:04:50.515913   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.516473   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.516498   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.516800   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.517008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.518559   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.518811   65170 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.518825   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:04:50.518842   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.522625   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523156   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.523537   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523810   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.523984   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.524193   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.524430   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.682066   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:04:50.699269   65170 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709309   65170 node_ready.go:49] node "default-k8s-diff-port-660775" has status "Ready":"True"
	I0318 22:04:50.709330   65170 node_ready.go:38] duration metric: took 10.026001ms for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709342   65170 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.713958   65170 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720434   65170 pod_ready.go:92] pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.720459   65170 pod_ready.go:81] duration metric: took 6.477329ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720471   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725799   65170 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.725820   65170 pod_ready.go:81] duration metric: took 5.341405ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725829   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.730987   65170 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.731006   65170 pod_ready.go:81] duration metric: took 5.171376ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.731016   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737458   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.737481   65170 pod_ready.go:81] duration metric: took 6.458242ms for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737490   65170 pod_ready.go:38] duration metric: took 28.137606ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
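The node and pod readiness checks above go through the Kubernetes API; the same state can be confirmed from the host with kubectl against the context this profile writes (a sketch, context name taken from these logs):

	kubectl --context default-k8s-diff-port-660775 get nodes
	kubectl --context default-k8s-diff-port-660775 -n kube-system get pods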
	I0318 22:04:50.737506   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:04:50.737560   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:04:50.757770   65170 api_server.go:72] duration metric: took 309.622189ms to wait for apiserver process to appear ...
	I0318 22:04:50.757795   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:04:50.757815   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 22:04:50.765732   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 22:04:50.769202   65170 api_server.go:141] control plane version: v1.28.4
	I0318 22:04:50.769228   65170 api_server.go:131] duration metric: took 11.424563ms to wait for apiserver health ...
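The healthz probe is a plain HTTPS GET against the apiserver on this profile's non-default port 8444. It can be repeated from the host for a quick check (a sketch; recent Kubernetes versions typically allow anonymous reads of /healthz, otherwise pass the client certificates from the profile's kubeconfig):

	curl -k https://192.168.50.150:8444/healthz   # prints "ok" when the control plane is healthy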
	I0318 22:04:50.769238   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:04:50.831223   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.859994   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:04:50.860014   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:04:50.864994   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.905212   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:04:50.905257   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:04:50.918389   65170 system_pods.go:59] 4 kube-system pods found
	I0318 22:04:50.918416   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:50.918422   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:50.918426   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:50.918429   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:50.918435   65170 system_pods.go:74] duration metric: took 149.190745ms to wait for pod list to return data ...
	I0318 22:04:50.918442   65170 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:50.993150   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:50.993174   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:04:51.056974   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:51.124585   65170 default_sa.go:45] found service account: "default"
	I0318 22:04:51.124612   65170 default_sa.go:55] duration metric: took 206.163161ms for default service account to be created ...
	I0318 22:04:51.124624   65170 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:51.347373   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.347408   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347419   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347426   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.347433   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.347440   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.347452   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.347458   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.347478   65170 retry.go:31] will retry after 201.830143ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.556559   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.556594   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556605   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556621   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.556630   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.556638   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.556648   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.556663   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.556681   65170 retry.go:31] will retry after 312.139871ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.878515   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.878546   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878554   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878562   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.878568   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.878573   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.878579   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.878582   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.878596   65170 retry.go:31] will retry after 379.864885ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.364944   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.364971   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364979   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364987   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.364995   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.365002   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.365011   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:52.365018   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.365039   65170 retry.go:31] will retry after 598.040475ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.752856   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921596456s)
	I0318 22:04:52.752915   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.752928   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753278   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753303   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.753314   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.753323   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753565   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753580   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.781081   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.781102   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.781396   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.781417   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.973228   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.973256   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Running
	I0318 22:04:52.973262   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Running
	I0318 22:04:52.973269   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.973275   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.973282   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.973289   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Running
	I0318 22:04:52.973295   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.973304   65170 system_pods.go:126] duration metric: took 1.848673952s to wait for k8s-apps to be running ...
	I0318 22:04:52.973310   65170 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:52.973361   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:53.343164   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.286142485s)
	I0318 22:04:53.343193   65170 system_svc.go:56] duration metric: took 369.874916ms WaitForService to wait for kubelet
	I0318 22:04:53.343215   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343229   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343216   65170 kubeadm.go:576] duration metric: took 2.89507195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:53.343238   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:53.343265   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.478242665s)
	I0318 22:04:53.343301   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343311   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343510   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.343555   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.343564   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.343572   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343580   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345065   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345078   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345082   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345065   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345094   65170 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-660775"
	I0318 22:04:53.345094   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345117   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345127   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.345136   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345401   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345400   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345419   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.347668   65170 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0318 22:04:53.348839   65170 addons.go:505] duration metric: took 2.900603006s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
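The three addons were enabled as part of start; on an existing profile the same toggles are available from the minikube CLI (for reference, using the profile name from these logs):

	minikube -p default-k8s-diff-port-660775 addons list
	minikube -p default-k8s-diff-port-660775 addons enable metrics-server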
	I0318 22:04:53.363245   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:53.363274   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:53.363307   65170 node_conditions.go:105] duration metric: took 20.053581ms to run NodePressure ...
	I0318 22:04:53.363325   65170 start.go:240] waiting for startup goroutines ...
	I0318 22:04:53.363339   65170 start.go:245] waiting for cluster config update ...
	I0318 22:04:53.363353   65170 start.go:254] writing updated cluster config ...
	I0318 22:04:53.363674   65170 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:53.429018   65170 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:04:53.430584   65170 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-660775" cluster and "default" namespace by default
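The "minor skew: 1" note is informational only: kubectl is supported against clusters within one minor version in either direction, so a 1.29 client talking to a 1.28 control plane is fine. The client/server pair can be re-checked at any time with:

	kubectl --context default-k8s-diff-port-660775 version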
	I0318 22:04:59.424318   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:59.425052   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:59.425084   65622 kubeadm.go:309] 
	I0318 22:04:59.425146   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:04:59.425207   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:04:59.425223   65622 kubeadm.go:309] 
	I0318 22:04:59.425262   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:04:59.425298   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:04:59.425454   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:04:59.425481   65622 kubeadm.go:309] 
	I0318 22:04:59.425647   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:04:59.425704   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:04:59.425752   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:04:59.425762   65622 kubeadm.go:309] 
	I0318 22:04:59.425917   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:04:59.426033   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:04:59.426045   65622 kubeadm.go:309] 
	I0318 22:04:59.426212   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:04:59.426346   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:04:59.426454   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:04:59.426547   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:04:59.426558   65622 kubeadm.go:309] 
	I0318 22:04:59.427148   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:59.427271   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:04:59.427372   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 22:04:59.427528   65622 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
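When reproducing this failure locally, the kubelet checks that kubeadm suggests can be run on the guest through minikube's SSH wrapper (a sketch; <profile> stands for the profile name used by this run, which is not shown in the lines above):

	minikube -p <profile> ssh -- sudo systemctl status kubelet
	minikube -p <profile> ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	minikube -p <profile> ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a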
	
	I0318 22:04:59.427572   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:05:00.055064   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:05:00.070514   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:05:00.083916   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:05:00.083938   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:05:00.083984   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:05:00.095316   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:05:00.095362   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:05:00.106457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:05:00.117255   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:05:00.117309   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:05:00.128432   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.138314   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:05:00.138371   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.148443   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:05:00.158539   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:05:00.158585   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:05:00.169165   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:05:00.245400   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:05:00.245473   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:05:00.417644   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:05:00.417785   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:05:00.417883   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:05:00.634147   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:05:00.635738   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:05:00.635843   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:05:00.635930   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:05:00.636028   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:05:00.636089   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:05:00.636314   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:05:00.636537   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:05:00.636954   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:05:00.637502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:05:00.637924   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:05:00.638340   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:05:00.638425   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:05:00.638514   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:05:00.913839   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:05:00.990231   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:05:01.230957   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:05:01.548589   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:05:01.567890   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:05:01.569831   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:05:01.569913   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:05:01.734815   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:05:01.736685   65622 out.go:204]   - Booting up control plane ...
	I0318 22:05:01.736810   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:05:01.749926   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:05:01.751335   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:05:01.753793   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:05:01.754600   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:05:41.756944   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:05:41.757321   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:41.757565   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:46.758228   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:46.758483   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:56.759061   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:56.759280   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:16.760134   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:16.760369   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761317   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:56.761611   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761630   65622 kubeadm.go:309] 
	I0318 22:06:56.761682   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:06:56.761725   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:06:56.761732   65622 kubeadm.go:309] 
	I0318 22:06:56.761782   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:06:56.761829   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:06:56.761971   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:06:56.761988   65622 kubeadm.go:309] 
	I0318 22:06:56.762111   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:06:56.762159   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:06:56.762207   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:06:56.762221   65622 kubeadm.go:309] 
	I0318 22:06:56.762382   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:06:56.762502   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:06:56.762512   65622 kubeadm.go:309] 
	I0318 22:06:56.762630   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:06:56.762758   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:06:56.762856   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:06:56.762985   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:06:56.763011   65622 kubeadm.go:309] 
	I0318 22:06:56.763456   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:06:56.763590   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:06:56.763681   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 22:06:56.763764   65622 kubeadm.go:393] duration metric: took 7m58.719030677s to StartCluster
	I0318 22:06:56.763817   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:06:56.763885   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:06:56.813440   65622 cri.go:89] found id: ""
	I0318 22:06:56.813469   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.813480   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:06:56.813487   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:06:56.813553   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:06:56.852826   65622 cri.go:89] found id: ""
	I0318 22:06:56.852854   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.852865   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:06:56.852872   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:06:56.852949   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:06:56.894024   65622 cri.go:89] found id: ""
	I0318 22:06:56.894049   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.894057   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:06:56.894062   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:06:56.894123   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:06:56.932924   65622 cri.go:89] found id: ""
	I0318 22:06:56.932955   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.932967   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:06:56.932975   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:06:56.933033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:06:56.973307   65622 cri.go:89] found id: ""
	I0318 22:06:56.973336   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.973344   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:06:56.973350   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:06:56.973405   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:06:57.009107   65622 cri.go:89] found id: ""
	I0318 22:06:57.009134   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.009142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:06:57.009151   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:06:57.009213   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:06:57.046883   65622 cri.go:89] found id: ""
	I0318 22:06:57.046912   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.046922   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:06:57.046930   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:06:57.046991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:06:57.087670   65622 cri.go:89] found id: ""
	I0318 22:06:57.087698   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.087709   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:06:57.087722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:06:57.087736   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:06:57.143284   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:06:57.143320   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:06:57.159775   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:06:57.159803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:06:57.248520   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:06:57.248548   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:06:57.248563   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:06:57.368197   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:06:57.368230   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 22:06:57.413080   65622 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 22:06:57.413134   65622 out.go:239] * 
	W0318 22:06:57.413205   65622 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.413237   65622 out.go:239] * 
	W0318 22:06:57.414373   65622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 22:06:57.417746   65622 out.go:177] 
	W0318 22:06:57.418940   65622 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.419004   65622 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 22:06:57.419028   65622 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 22:06:57.420531   65622 out.go:177] 
	
	
	==> CRI-O <==
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.577698921Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800162577674181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae840c56-bd7c-44cd-afab-695b15d581d5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.578788592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec288ce0-97a5-41ac-a628-fa5991364d7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.578853643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec288ce0-97a5-41ac-a628-fa5991364d7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.578919234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ec288ce0-97a5-41ac-a628-fa5991364d7f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.619260017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bf2f286-a65a-439e-997f-7268ab51c18c name=/runtime.v1.RuntimeService/Version
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.619395958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bf2f286-a65a-439e-997f-7268ab51c18c name=/runtime.v1.RuntimeService/Version
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.621274916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fd9f538-1e81-4cdd-8ff6-9030331a7dcb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.621877441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800162621851158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fd9f538-1e81-4cdd-8ff6-9030331a7dcb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.622628574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e82b534-6902-434a-b126-71dc78decbc2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.622685799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e82b534-6902-434a-b126-71dc78decbc2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.622744836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3e82b534-6902-434a-b126-71dc78decbc2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.659813029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72686594-93de-4a17-bee5-5f10ac902641 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.659881807Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72686594-93de-4a17-bee5-5f10ac902641 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.661415396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc5f0103-1a6a-44e8-9e94-fda2287f3fb2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.661889339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800162661861130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc5f0103-1a6a-44e8-9e94-fda2287f3fb2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.662520925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e281e01f-b81f-4010-9c5d-9f7e393f8cc8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.662571704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e281e01f-b81f-4010-9c5d-9f7e393f8cc8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.662618126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e281e01f-b81f-4010-9c5d-9f7e393f8cc8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.704761563Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5e7aa4f-56cc-4aeb-910a-7306845547a3 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.704862347Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5e7aa4f-56cc-4aeb-910a-7306845547a3 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.706400003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=130129e5-d61f-4f97-bccf-d2919717bb10 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.706928073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800162706893656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=130129e5-d61f-4f97-bccf-d2919717bb10 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.708327918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62e449f4-032e-4e17-a6a1-0d30d82d0034 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.708477769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62e449f4-032e-4e17-a6a1-0d30d82d0034 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:16:02 old-k8s-version-648232 crio[655]: time="2024-03-18 22:16:02.708567374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=62e449f4-032e-4e17-a6a1-0d30d82d0034 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 21:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055911] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044955] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.805580] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387906] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.740121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.008894] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062333] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062747] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.184160] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.169340] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.291707] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +7.214394] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.068357] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.838802] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Mar18 21:59] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 22:03] systemd-fstab-generator[4983]: Ignoring "noauto" option for root device
	[Mar18 22:05] systemd-fstab-generator[5260]: Ignoring "noauto" option for root device
	[  +0.070654] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:16:02 up 17 min,  0 users,  load average: 0.06, 0.12, 0.09
	Linux old-k8s-version-648232 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: goroutine 158 [select]:
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000591540, 0x1, 0x0, 0x0, 0x0, 0x0)
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000bf4180, 0x0, 0x0)
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0000d36c0)
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: goroutine 159 [syscall]:
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: syscall.Syscall6(0xe8, 0xc, 0xc000e8fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000e8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000d397a0, 0x0, 0x0, 0x0)
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000591e00)
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Mar 18 22:16:02 old-k8s-version-648232 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Mar 18 22:16:02 old-k8s-version-648232 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 22:16:02 old-k8s-version-648232 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 2 (236.473548ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-648232" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (359.95s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-141758 -n embed-certs-141758
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 22:18:59.431792437 +0000 UTC m=+6596.333580885
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-141758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-141758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.763µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-141758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-141758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-141758 logs -n 25: (1.439574011s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-369155 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | disable-driver-mounts-369155                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:50 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-660775  | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC |                     |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-141758            | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 22:17 UTC | 18 Mar 24 22:17 UTC |
	| start   | -p newest-cni-962491 --memory=2200 --alsologtostderr   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:17 UTC | 18 Mar 24 22:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:18 UTC |
	| addons  | enable metrics-server -p newest-cni-962491             | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-962491                                   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-962491                  | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-962491 --memory=2200 --alsologtostderr   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 22:18:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 22:18:59.245638   71620 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:18:59.245751   71620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:18:59.245760   71620 out.go:304] Setting ErrFile to fd 2...
	I0318 22:18:59.245764   71620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:18:59.245952   71620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 22:18:59.246519   71620 out.go:298] Setting JSON to false
	I0318 22:18:59.247486   71620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7283,"bootTime":1710793056,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:18:59.247541   71620 start.go:139] virtualization: kvm guest
	I0318 22:18:59.249985   71620 out.go:177] * [newest-cni-962491] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 22:18:59.251333   71620 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 22:18:59.252673   71620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:18:59.251400   71620 notify.go:220] Checking for updates...
	I0318 22:18:59.254071   71620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:18:59.255238   71620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 22:18:59.256612   71620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 22:18:59.257979   71620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 22:18:59.259741   71620 config.go:182] Loaded profile config "newest-cni-962491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 22:18:59.260269   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:18:59.260317   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:18:59.275900   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0318 22:18:59.276381   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:18:59.276980   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:18:59.277023   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:18:59.277397   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:18:59.277574   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:59.277804   71620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:18:59.278068   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:18:59.278101   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:18:59.295102   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45303
	I0318 22:18:59.295615   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:18:59.296044   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:18:59.296068   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:18:59.296385   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:18:59.296562   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:59.331391   71620 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 22:18:59.332703   71620 start.go:297] selected driver: kvm2
	I0318 22:18:59.332716   71620 start.go:901] validating driver "kvm2" against &{Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pod
s:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:18:59.332839   71620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 22:18:59.333554   71620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:18:59.333622   71620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 22:18:59.347853   71620 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 22:18:59.348246   71620 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 22:18:59.348318   71620 cni.go:84] Creating CNI manager for ""
	I0318 22:18:59.348335   71620 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:18:59.348380   71620 start.go:340] cluster config:
	{Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:18:59.348500   71620 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:18:59.350337   71620 out.go:177] * Starting "newest-cni-962491" primary control-plane node in "newest-cni-962491" cluster
	I0318 22:18:59.351708   71620 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 22:18:59.351752   71620 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 22:18:59.351764   71620 cache.go:56] Caching tarball of preloaded images
	I0318 22:18:59.351854   71620 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 22:18:59.351869   71620 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 22:18:59.352012   71620 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/config.json ...
	I0318 22:18:59.352218   71620 start.go:360] acquireMachinesLock for newest-cni-962491: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 22:18:59.352262   71620 start.go:364] duration metric: took 25.466µs to acquireMachinesLock for "newest-cni-962491"
	I0318 22:18:59.352275   71620 start.go:96] Skipping create...Using existing machine configuration
	I0318 22:18:59.352282   71620 fix.go:54] fixHost starting: 
	I0318 22:18:59.352614   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:18:59.352656   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:18:59.366964   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0318 22:18:59.367454   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:18:59.367969   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:18:59.367990   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:18:59.368314   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:18:59.368494   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:59.368643   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetState
	I0318 22:18:59.370149   71620 fix.go:112] recreateIfNeeded on newest-cni-962491: state=Stopped err=<nil>
	I0318 22:18:59.370187   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	W0318 22:18:59.370343   71620 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 22:18:59.372502   71620 out.go:177] * Restarting existing kvm2 VM for "newest-cni-962491" ...
	
	
	==> CRI-O <==
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.275663627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a16bddf9-5b16-477b-86b7-214703173e14 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.277635187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0702c153-d17d-4219-a51e-101c8976aeaf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.278234377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800340278211345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0702c153-d17d-4219-a51e-101c8976aeaf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.282373943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1a5051d-e718-413e-a81f-e30f5ea70d42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.282461074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1a5051d-e718-413e-a81f-e30f5ea70d42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.283161091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985,PodSandboxId:602595fb2f2c5d54885221d7860233eb18e6f916bfea6fa242d4ec6e4143cd74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799436002382334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b08bb6c-9220-4ae9-83f9-0260b1e4a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b5744539,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e,PodSandboxId:e59b154fdcb690ee5d092c750083610ce85f7171cfda4cdfb540cf6728009f2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433963369730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k675p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727682ae-0ac1-4854-a49c-0f6ae4384551,},Annotations:map[string]string{io.kubernetes.container.hash: 3c90677e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3,PodSandboxId:67e1403b890e34cea4c112f7c9b20ebe59252d39d0b8e60b19028835d36a4d5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799433600202876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jltc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64
02012-bfc2-4049-b813-a9fa547277a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2ba32011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125,PodSandboxId:bd90f5c62c645ba5c0297a07dbf8cff27017e9ac21595aaeaae0aa7861a72bca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433718407713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rlz67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babdb200-b39a-4555-b14f-12e448531c
f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6431f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f,PodSandboxId:0901292a3664b1a15050f2dd3a9e84867cbac9c631a16d17612f801c38244946,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710799413035311980,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3094571113789a04c5cdf076b52bc74,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36,PodSandboxId:975040c926ee84eefbd295605d433a35e871f4aaefd61a2d5ea419abf184fb9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799413028361652,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8245b2c678cff590c94d28a682833c6,},Annotations:map[string]string{io.kubernetes.container.hash: db50e89d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74,PodSandboxId:211dd5511e79238af3341b082f48c5b2112b8b5252e8137cd108209115817a4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799413044255337,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344,PodSandboxId:9cc6f09fa70898bc9564b713acbbc29a0abf989de26febea6d327c830d6fb059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799412900589265,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a8d56c6e28bc3e3badd3055c8311287,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a6d9741a5d9b35c1eb6100662502fc8bc73108438c5425a6a607785035fb32,PodSandboxId:6cc22236962b7e07dc6152789dc9f79cddf68e0903a37dee43ceb2edb6537336,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710799119031320238,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1a5051d-e718-413e-a81f-e30f5ea70d42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.291002850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5056def4-150f-44da-9e0b-dc09a7fd3ff6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.291061801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5056def4-150f-44da-9e0b-dc09a7fd3ff6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.291391740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985,PodSandboxId:602595fb2f2c5d54885221d7860233eb18e6f916bfea6fa242d4ec6e4143cd74,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799436002382334,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b08bb6c-9220-4ae9-83f9-0260b1e4a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b5744539,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e,PodSandboxId:e59b154fdcb690ee5d092c750083610ce85f7171cfda4cdfb540cf6728009f2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433963369730,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k675p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727682ae-0ac1-4854-a49c-0f6ae4384551,},Annotations:map[string]string{io.kubernetes.container.hash: 3c90677e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3,PodSandboxId:67e1403b890e34cea4c112f7c9b20ebe59252d39d0b8e60b19028835d36a4d5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799433600202876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jltc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b64
02012-bfc2-4049-b813-a9fa547277a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2ba32011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125,PodSandboxId:bd90f5c62c645ba5c0297a07dbf8cff27017e9ac21595aaeaae0aa7861a72bca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799433718407713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rlz67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babdb200-b39a-4555-b14f-12e448531c
f2,},Annotations:map[string]string{io.kubernetes.container.hash: e6431f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f,PodSandboxId:0901292a3664b1a15050f2dd3a9e84867cbac9c631a16d17612f801c38244946,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710799413035311980,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3094571113789a04c5cdf076b52bc74,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36,PodSandboxId:975040c926ee84eefbd295605d433a35e871f4aaefd61a2d5ea419abf184fb9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799413028361652,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8245b2c678cff590c94d28a682833c6,},Annotations:map[string]string{io.kubernetes.container.hash: db50e89d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74,PodSandboxId:211dd5511e79238af3341b082f48c5b2112b8b5252e8137cd108209115817a4d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799413044255337,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344,PodSandboxId:9cc6f09fa70898bc9564b713acbbc29a0abf989de26febea6d327c830d6fb059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799412900589265,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a8d56c6e28bc3e3badd3055c8311287,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a6d9741a5d9b35c1eb6100662502fc8bc73108438c5425a6a607785035fb32,PodSandboxId:6cc22236962b7e07dc6152789dc9f79cddf68e0903a37dee43ceb2edb6537336,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710799119031320238,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5056def4-150f-44da-9e0b-dc09a7fd3ff6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.293001138Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985,Verbose:false,}" file="otel-collector/interceptors.go:62" id=112bebcf-37df-4456-b656-a2179fc721e4 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.293154195Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710799436067327477,StartedAt:1710799436116743843,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b08bb6c-9220-4ae9-83f9-0260b1e4a39f,},Annotations:map[string]string{io.kubernetes.container.hash: b5744539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/3b08bb6c-9220-4ae9-83f9-0260b1e4a39f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/3b08bb6c-9220-4ae9-83f9-0260b1e4a39f/containers/storage-provisioner/b45e32f5,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/3b08bb6c-9220-4ae9-83f9-0260b1e4a39f/volumes/kubernetes.io~projected/kube-api-access-27czb,Readonly:true,SelinuxRelabel:fal
se,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_3b08bb6c-9220-4ae9-83f9-0260b1e4a39f/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=112bebcf-37df-4456-b656-a2179fc721e4 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.294191889Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=33f5248f-a7cb-4514-b86f-f0a6ced26e5c name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.294298094Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710799434077573427,StartedAt:1710799434109276808,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-k675p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727682ae-0ac1-4854-a49c-0f6ae4384551,},Annotations:map[string]string{io.kubernetes.container.hash: 3c90677e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/727682ae-0ac1-4854-a49c-0f6ae4384551/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/727682ae-0ac1-4854-a49c-0f6ae4384551/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/727682ae-0ac1-4854-a49c-0f6ae4384551/containers/coredns/70019d4f,Readonly:false,SelinuxRelabel:false,Propagatio
n:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/727682ae-0ac1-4854-a49c-0f6ae4384551/volumes/kubernetes.io~projected/kube-api-access-b5b5c,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-k675p_727682ae-0ac1-4854-a49c-0f6ae4384551/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=33f5248f-a7cb-4514-b86f-f0a6ced26e5c name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.294882025Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3,Verbose:false,}" file="otel-collector/interceptors.go:62" id=fd25d462-191f-41f7-8141-2cc4121db9dc name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.294985152Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710799433947180350,StartedAt:1710799434030245234,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jltc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6402012-bfc2-4049-b813-a9fa547277a7,},Annotations:map[string]string{io.kubernetes.container.hash: 2ba32011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b6402012-bfc2-4049-b813-a9fa547277a7/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b6402012-bfc2-4049-b813-a9fa547277a7/containers/kube-proxy/cef11135,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var
/lib/kubelet/pods/b6402012-bfc2-4049-b813-a9fa547277a7/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/b6402012-bfc2-4049-b813-a9fa547277a7/volumes/kubernetes.io~projected/kube-api-access-755sg,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-jltc7_b6402012-bfc2-4049-b813-a9fa547277a7/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-
collector/interceptors.go:74" id=fd25d462-191f-41f7-8141-2cc4121db9dc name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.295426197Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125,Verbose:false,}" file="otel-collector/interceptors.go:62" id=76889bd2-0b96-42da-8cb9-50b2fdf430ee name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.295523741Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710799433853784073,StartedAt:1710799433888311683,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-rlz67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: babdb200-b39a-4555-b14f-12e448531cf2,},Annotations:map[string]string{io.kubernetes.container.hash: e6431f3a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"c
ontainerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/babdb200-b39a-4555-b14f-12e448531cf2/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/babdb200-b39a-4555-b14f-12e448531cf2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/babdb200-b39a-4555-b14f-12e448531cf2/containers/coredns/2407f51a,Readonly:false,SelinuxRelabel:false,Propagatio
n:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/babdb200-b39a-4555-b14f-12e448531cf2/volumes/kubernetes.io~projected/kube-api-access-zqwn8,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-rlz67_babdb200-b39a-4555-b14f-12e448531cf2/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=76889bd2-0b96-42da-8cb9-50b2fdf430ee name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.296092366Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=238def58-be32-4016-a991-0c87ef589978 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.296175926Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710799413193217596,StartedAt:1710799413364737911,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3094571113789a04c5cdf076b52bc74,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b3094571113789a04c5cdf076b52bc74/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b3094571113789a04c5cdf076b52bc74/containers/kube-scheduler/9057e8c7,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-141758_b3094571113789a04c5cdf076b52bc74/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{Cp
uPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=238def58-be32-4016-a991-0c87ef589978 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.296799835Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36,Verbose:false,}" file="otel-collector/interceptors.go:62" id=2eb9b76d-ef60-4686-9e82-77be329e98e4 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.297413706Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710799413139110409,StartedAt:1710799413337043439,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8245b2c678cff590c94d28a682833c6,},Annotations:map[string]string{io.kubernetes.container.hash: db50e89d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a8245b2c678cff590c94d28a682833c6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a8245b2c678cff590c94d28a682833c6/containers/etcd/8b8acaca,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd
-embed-certs-141758_a8245b2c678cff590c94d28a682833c6/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2eb9b76d-ef60-4686-9e82-77be329e98e4 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.298879162Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74,Verbose:false,}" file="otel-collector/interceptors.go:62" id=97338a70-5f5c-4de4-abc4-83943da8486d name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.299002821Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710799413136067304,StartedAt:1710799413219623579,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cae75e1db3d3fe6e056036b5be55d8b,},Annotations:map[string]string{io.kubernetes.container.hash: 821f278a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9cae75e1db3d3fe6e056036b5be55d8b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9cae75e1db3d3fe6e056036b5be55d8b/containers/kube-apiserver/58a124b0,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Conta
inerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-141758_9cae75e1db3d3fe6e056036b5be55d8b/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=97338a70-5f5c-4de4-abc4-83943da8486d name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.300094154Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344,Verbose:false,}" file="otel-collector/interceptors.go:62" id=73033227-ee7d-4285-9ef1-21903e56d097 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 22:19:00 embed-certs-141758 crio[705]: time="2024-03-18 22:19:00.300707979Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710799412995936460,StartedAt:1710799413099241837,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-141758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a8d56c6e28bc3e3badd3055c8311287,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/1a8d56c6e28bc3e3badd3055c8311287/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/1a8d56c6e28bc3e3badd3055c8311287/containers/kube-controller-manager/1b03be42,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-embed-certs-141758_1a8d56c6e28bc3e3badd3055c8311287/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,Cpus
etMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=73033227-ee7d-4285-9ef1-21903e56d097 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6088a2f461a8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   602595fb2f2c5       storage-provisioner
	9c4e1201e3144       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   e59b154fdcb69       coredns-5dd5756b68-k675p
	45649fa931945       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   bd90f5c62c645       coredns-5dd5756b68-rlz67
	e06a57a587ded       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   67e1403b890e3       kube-proxy-jltc7
	736909ef62e4a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   211dd5511e792       kube-apiserver-embed-certs-141758
	9c38655e32331       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   0901292a3664b       kube-scheduler-embed-certs-141758
	15d9a82debe78       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   975040c926ee8       etcd-embed-certs-141758
	d2f95eb902268       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   9cc6f09fa7089       kube-controller-manager-embed-certs-141758
	d9a6d9741a5d9       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   20 minutes ago      Exited              kube-apiserver            1                   6cc22236962b7       kube-apiserver-embed-certs-141758
	
	
	==> coredns [45649fa931945444af3fdc4ff150e898b886c98e41ae5400ff09a0bc43d5b125] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [9c4e1201e31449227cb3bbf62745fb64a9b6ded1ceac595dc7f5857acc2c0a3e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-141758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-141758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=embed-certs-141758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T22_03_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 22:03:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-141758
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 22:18:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 22:14:12 +0000   Mon, 18 Mar 2024 22:03:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 22:14:12 +0000   Mon, 18 Mar 2024 22:03:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 22:14:12 +0000   Mon, 18 Mar 2024 22:03:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 22:14:12 +0000   Mon, 18 Mar 2024 22:03:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    embed-certs-141758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e172d7ce4dbf4889bb535d39511e5f70
	  System UUID:                e172d7ce-4dbf-4889-bb53-5d39511e5f70
	  Boot ID:                    8756b731-c73d-436f-a7c4-89b722bcc512
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-k675p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-rlz67                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-141758                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-141758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-141758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-jltc7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-141758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-pmkgs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-141758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-141758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-141758 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-141758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-141758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-141758 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-141758 event: Registered Node embed-certs-141758 in Controller
	
	
	==> dmesg <==
	[  +0.052228] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042480] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.560295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.408996] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.433155] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.922761] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.055960] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059418] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.207689] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.134038] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.320089] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +5.420524] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +0.065989] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.004918] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[  +5.645939] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.406574] kauditd_printk_skb: 74 callbacks suppressed
	[Mar18 22:03] kauditd_printk_skb: 1 callbacks suppressed
	[  +1.553931] systemd-fstab-generator[3429]: Ignoring "noauto" option for root device
	[  +7.268791] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[  +0.085166] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.874019] systemd-fstab-generator[3963]: Ignoring "noauto" option for root device
	[  +0.107369] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 22:04] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [15d9a82debe781aa5a1980cdfaeae5f83c2680dc577773cabcf6be3b25428e36] <==
	{"level":"info","ts":"2024-03-18T22:03:34.165976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T22:03:34.165992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 1"}
	{"level":"info","ts":"2024-03-18T22:03:34.166003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T22:03:34.166008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgVoteResp from 4d6f7e7e767b3ff3 at term 2"}
	{"level":"info","ts":"2024-03-18T22:03:34.166016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T22:03:34.166023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d6f7e7e767b3ff3 elected leader 4d6f7e7e767b3ff3 at term 2"}
	{"level":"info","ts":"2024-03-18T22:03:34.169253Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:03:34.172234Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4d6f7e7e767b3ff3","local-member-attributes":"{Name:embed-certs-141758 ClientURLs:[https://192.168.39.243:2379]}","request-path":"/0/members/4d6f7e7e767b3ff3/attributes","cluster-id":"c7dcc22c4a571085","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T22:03:34.172284Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T22:03:34.175367Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T22:03:34.177922Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:03:34.178013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T22:03:34.179124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.243:2379"}
	{"level":"info","ts":"2024-03-18T22:03:34.198884Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T22:03:34.19896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T22:03:34.178032Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:03:34.200928Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:13:34.254531Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":711}
	{"level":"info","ts":"2024-03-18T22:13:34.257377Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":711,"took":"2.004281ms","hash":2401369912}
	{"level":"info","ts":"2024-03-18T22:13:34.257457Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2401369912,"revision":711,"compact-revision":-1}
	{"level":"warn","ts":"2024-03-18T22:18:21.528438Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.319156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T22:18:21.529122Z","caller":"traceutil/trace.go:171","msg":"trace[1904851606] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1187; }","duration":"127.09576ms","start":"2024-03-18T22:18:21.401987Z","end":"2024-03-18T22:18:21.529083Z","steps":["trace[1904851606] 'range keys from in-memory index tree'  (duration: 126.234945ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:18:34.262581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":954}
	{"level":"info","ts":"2024-03-18T22:18:34.264292Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":954,"took":"1.261118ms","hash":675708986}
	{"level":"info","ts":"2024-03-18T22:18:34.264361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":675708986,"revision":954,"compact-revision":711}
	
	
	==> kernel <==
	 22:19:00 up 20 min,  0 users,  load average: 0.25, 0.22, 0.13
	Linux embed-certs-141758 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [736909ef62e4a605a54d2764272f825335bfac450a78382e34e51cfbf6c95e74] <==
	E0318 22:14:36.979123       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:14:36.979166       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:15:35.851052       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 22:16:35.851154       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:16:36.978652       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:16:36.979018       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:16:36.979077       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:16:36.979711       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:16:36.979959       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:16:36.980163       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:17:35.851192       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 22:18:35.850713       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:18:35.982968       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:18:35.983178       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:18:35.983796       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:18:36.983910       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:18:36.984010       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:18:36.984021       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:18:36.984151       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:18:36.984183       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:18:36.985382       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d9a6d9741a5d9b35c1eb6100662502fc8bc73108438c5425a6a607785035fb32] <==
	W0318 22:03:25.908685       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:25.915247       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:25.920337       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:25.925924       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.019283       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.052188       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.083004       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.090074       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.104547       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.204406       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.217137       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.250700       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.368985       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.419754       1 logging.go:59] [core] [Channel #8 SubChannel #9] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.445166       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.481211       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.519084       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.524030       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.540529       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.572367       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.754499       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.838170       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.943966       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:26.955545       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 22:03:27.166944       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d2f95eb902268ca6cb5ba113ebf807c6e36e200da4f81deb61522b2284917344] <==
	I0318 22:13:22.474997       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:13:51.981081       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:13:52.484200       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:14:21.987030       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:14:22.492505       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:14:52.000080       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:14:52.504437       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 22:15:06.318086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="384.378µs"
	I0318 22:15:17.317906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="254.819µs"
	E0318 22:15:22.006362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:15:22.512610       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:15:52.012126       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:15:52.522497       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:16:22.017930       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:16:22.532209       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:16:52.024550       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:16:52.542743       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:17:22.030990       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:17:22.552583       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:17:52.037253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:17:52.564426       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:18:22.042974       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:18:22.575472       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:18:52.049707       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:18:52.585612       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e06a57a587dedf9f88e9ae899236a913fb0e9b956b2e9e44f1e4407917aa86c3] <==
	I0318 22:03:54.264948       1 server_others.go:69] "Using iptables proxy"
	I0318 22:03:54.289897       1 node.go:141] Successfully retrieved node IP: 192.168.39.243
	I0318 22:03:54.353050       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 22:03:54.353101       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 22:03:54.360240       1 server_others.go:152] "Using iptables Proxier"
	I0318 22:03:54.360607       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 22:03:54.360978       1 server.go:846] "Version info" version="v1.28.4"
	I0318 22:03:54.361015       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 22:03:54.362944       1 config.go:188] "Starting service config controller"
	I0318 22:03:54.363574       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 22:03:54.363700       1 config.go:97] "Starting endpoint slice config controller"
	I0318 22:03:54.363733       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 22:03:54.366285       1 config.go:315] "Starting node config controller"
	I0318 22:03:54.366346       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 22:03:54.463782       1 shared_informer.go:318] Caches are synced for service config
	I0318 22:03:54.463935       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 22:03:54.466934       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9c38655e323316c088d533a180ad0e0880299d40a2779c43bff840ceb5a2999f] <==
	W0318 22:03:35.984022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 22:03:35.984056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 22:03:36.788532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 22:03:36.788662       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 22:03:36.792166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 22:03:36.792230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 22:03:36.832206       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 22:03:36.832418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 22:03:36.881258       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 22:03:36.881312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 22:03:37.061468       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 22:03:37.061537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 22:03:37.086124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 22:03:37.086157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 22:03:37.171922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 22:03:37.172120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 22:03:37.179115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 22:03:37.179251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 22:03:37.220550       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 22:03:37.220670       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 22:03:37.247895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 22:03:37.248021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 22:03:37.541683       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 22:03:37.541783       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 22:03:39.175030       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 22:16:39 embed-certs-141758 kubelet[3762]: E0318 22:16:39.343165    3762 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:16:39 embed-certs-141758 kubelet[3762]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:16:39 embed-certs-141758 kubelet[3762]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:16:39 embed-certs-141758 kubelet[3762]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:16:39 embed-certs-141758 kubelet[3762]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:16:49 embed-certs-141758 kubelet[3762]: E0318 22:16:49.301118    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:17:03 embed-certs-141758 kubelet[3762]: E0318 22:17:03.301488    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:17:16 embed-certs-141758 kubelet[3762]: E0318 22:17:16.301742    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:17:31 embed-certs-141758 kubelet[3762]: E0318 22:17:31.301480    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:17:39 embed-certs-141758 kubelet[3762]: E0318 22:17:39.338446    3762 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:17:39 embed-certs-141758 kubelet[3762]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:17:39 embed-certs-141758 kubelet[3762]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:17:39 embed-certs-141758 kubelet[3762]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:17:39 embed-certs-141758 kubelet[3762]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:17:46 embed-certs-141758 kubelet[3762]: E0318 22:17:46.301654    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:18:01 embed-certs-141758 kubelet[3762]: E0318 22:18:01.300770    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:18:13 embed-certs-141758 kubelet[3762]: E0318 22:18:13.304708    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:18:24 embed-certs-141758 kubelet[3762]: E0318 22:18:24.300660    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:18:35 embed-certs-141758 kubelet[3762]: E0318 22:18:35.302185    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	Mar 18 22:18:39 embed-certs-141758 kubelet[3762]: E0318 22:18:39.338097    3762 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:18:39 embed-certs-141758 kubelet[3762]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:18:39 embed-certs-141758 kubelet[3762]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:18:39 embed-certs-141758 kubelet[3762]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:18:39 embed-certs-141758 kubelet[3762]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:18:47 embed-certs-141758 kubelet[3762]: E0318 22:18:47.302694    3762 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-pmkgs" podUID="e180b0c7-9efd-4063-b7be-9947b5f9522d"
	
	
	==> storage-provisioner [6088a2f461a8b3e68c7bf21551ae1577553d716a35b10e96444b419604e23985] <==
	I0318 22:03:56.158927       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 22:03:56.206617       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 22:03:56.206957       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 22:03:56.222360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 22:03:56.222615       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-141758_4cea16bc-7fbd-474e-935d-3b49dad23a05!
	I0318 22:03:56.225369       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"adea3981-c21e-473a-8b08-b95449c8a583", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-141758_4cea16bc-7fbd-474e-935d-3b49dad23a05 became leader
	I0318 22:03:56.323777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-141758_4cea16bc-7fbd-474e-935d-3b49dad23a05!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-141758 -n embed-certs-141758
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-141758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-pmkgs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-141758 describe pod metrics-server-57f55c9bc5-pmkgs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-141758 describe pod metrics-server-57f55c9bc5-pmkgs: exit status 1 (69.126841ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-pmkgs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-141758 describe pod metrics-server-57f55c9bc5-pmkgs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (359.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (332.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0318 22:13:08.305902   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963041 -n no-preload-963041
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 22:18:38.646168208 +0000 UTC m=+6575.547956663
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-963041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-963041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.66µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-963041 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
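
Note: the check at start_stop_delete_test.go:297 verifies that the dashboard-metrics-scraper deployment carries the overridden image registry.k8s.io/echoserver:1.4 (set earlier via addons enable dashboard --images=MetricsScraper=..., see the Audit table below); it reports empty deployment info here only because the preceding describe call had already exhausted its context deadline. A minimal sketch for checking the override by hand, outside the test's deadline (deployment name and namespace taken from the commands above):

	kubectl --context no-preload-963041 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o=jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects this to contain registry.k8s.io/echoserver:1.4
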
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-963041 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-963041 logs -n 25: (1.395096313s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo find                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo crio                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-389288                                       | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-369155 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | disable-driver-mounts-369155                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:50 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-660775  | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC |                     |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-141758            | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 22:17 UTC | 18 Mar 24 22:17 UTC |
	| start   | -p newest-cni-962491 --memory=2200 --alsologtostderr   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:17 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 22:17:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 22:17:47.026845   70885 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:17:47.026966   70885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:17:47.026979   70885 out.go:304] Setting ErrFile to fd 2...
	I0318 22:17:47.026987   70885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:17:47.027191   70885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 22:17:47.027810   70885 out.go:298] Setting JSON to false
	I0318 22:17:47.028774   70885 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7211,"bootTime":1710793056,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:17:47.028851   70885 start.go:139] virtualization: kvm guest
	I0318 22:17:47.031174   70885 out.go:177] * [newest-cni-962491] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 22:17:47.033011   70885 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 22:17:47.033007   70885 notify.go:220] Checking for updates...
	I0318 22:17:47.034667   70885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:17:47.036315   70885 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:17:47.037558   70885 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 22:17:47.038668   70885 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 22:17:47.039949   70885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 22:17:47.041581   70885 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:17:47.041693   70885 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:17:47.041810   70885 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 22:17:47.041951   70885 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:17:47.080259   70885 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 22:17:47.081569   70885 start.go:297] selected driver: kvm2
	I0318 22:17:47.081584   70885 start.go:901] validating driver "kvm2" against <nil>
	I0318 22:17:47.081593   70885 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 22:17:47.082417   70885 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:17:47.082496   70885 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 22:17:47.096859   70885 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 22:17:47.096922   70885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0318 22:17:47.096955   70885 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0318 22:17:47.097198   70885 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 22:17:47.097267   70885 cni.go:84] Creating CNI manager for ""
	I0318 22:17:47.097285   70885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:17:47.097302   70885 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 22:17:47.097374   70885 start.go:340] cluster config:
	{Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:17:47.097522   70885 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:17:47.099748   70885 out.go:177] * Starting "newest-cni-962491" primary control-plane node in "newest-cni-962491" cluster
	I0318 22:17:47.100990   70885 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 22:17:47.101023   70885 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 22:17:47.101035   70885 cache.go:56] Caching tarball of preloaded images
	I0318 22:17:47.101102   70885 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 22:17:47.101112   70885 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 22:17:47.101194   70885 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/config.json ...
	I0318 22:17:47.101210   70885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/config.json: {Name:mkdcc6aac67d984e310c359eec4040aad6c08d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:17:47.101352   70885 start.go:360] acquireMachinesLock for newest-cni-962491: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 22:17:47.101380   70885 start.go:364] duration metric: took 14.213µs to acquireMachinesLock for "newest-cni-962491"
	I0318 22:17:47.101393   70885 start.go:93] Provisioning new machine with config: &{Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:17:47.101454   70885 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 22:17:47.103022   70885 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 22:17:47.103150   70885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:17:47.103181   70885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:17:47.117469   70885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45329
	I0318 22:17:47.117970   70885 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:17:47.118535   70885 main.go:141] libmachine: Using API Version  1
	I0318 22:17:47.118559   70885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:17:47.118872   70885 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:17:47.119102   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetMachineName
	I0318 22:17:47.119437   70885 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:17:47.119617   70885 start.go:159] libmachine.API.Create for "newest-cni-962491" (driver="kvm2")
	I0318 22:17:47.119662   70885 client.go:168] LocalClient.Create starting
	I0318 22:17:47.119696   70885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem
	I0318 22:17:47.119727   70885 main.go:141] libmachine: Decoding PEM data...
	I0318 22:17:47.119747   70885 main.go:141] libmachine: Parsing certificate...
	I0318 22:17:47.119792   70885 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem
	I0318 22:17:47.119810   70885 main.go:141] libmachine: Decoding PEM data...
	I0318 22:17:47.119818   70885 main.go:141] libmachine: Parsing certificate...
	I0318 22:17:47.119833   70885 main.go:141] libmachine: Running pre-create checks...
	I0318 22:17:47.119839   70885 main.go:141] libmachine: (newest-cni-962491) Calling .PreCreateCheck
	I0318 22:17:47.120408   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetConfigRaw
	I0318 22:17:47.120875   70885 main.go:141] libmachine: Creating machine...
	I0318 22:17:47.120889   70885 main.go:141] libmachine: (newest-cni-962491) Calling .Create
	I0318 22:17:47.121054   70885 main.go:141] libmachine: (newest-cni-962491) Creating KVM machine...
	I0318 22:17:47.122460   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found existing default KVM network
	I0318 22:17:47.123609   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:47.123430   70908 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:21:ea} reservation:<nil>}
	I0318 22:17:47.124565   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:47.124476   70908 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:40:5b:75} reservation:<nil>}
	I0318 22:17:47.125949   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:47.125866   70908 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000213c90}
	I0318 22:17:47.125973   70885 main.go:141] libmachine: (newest-cni-962491) DBG | created network xml: 
	I0318 22:17:47.125982   70885 main.go:141] libmachine: (newest-cni-962491) DBG | <network>
	I0318 22:17:47.125992   70885 main.go:141] libmachine: (newest-cni-962491) DBG |   <name>mk-newest-cni-962491</name>
	I0318 22:17:47.126000   70885 main.go:141] libmachine: (newest-cni-962491) DBG |   <dns enable='no'/>
	I0318 22:17:47.126009   70885 main.go:141] libmachine: (newest-cni-962491) DBG |   
	I0318 22:17:47.126018   70885 main.go:141] libmachine: (newest-cni-962491) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0318 22:17:47.126027   70885 main.go:141] libmachine: (newest-cni-962491) DBG |     <dhcp>
	I0318 22:17:47.126037   70885 main.go:141] libmachine: (newest-cni-962491) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0318 22:17:47.126045   70885 main.go:141] libmachine: (newest-cni-962491) DBG |     </dhcp>
	I0318 22:17:47.126064   70885 main.go:141] libmachine: (newest-cni-962491) DBG |   </ip>
	I0318 22:17:47.126071   70885 main.go:141] libmachine: (newest-cni-962491) DBG |   
	I0318 22:17:47.126086   70885 main.go:141] libmachine: (newest-cni-962491) DBG | </network>
	I0318 22:17:47.126095   70885 main.go:141] libmachine: (newest-cni-962491) DBG | 
	I0318 22:17:47.131367   70885 main.go:141] libmachine: (newest-cni-962491) DBG | trying to create private KVM network mk-newest-cni-962491 192.168.61.0/24...
	I0318 22:17:47.206683   70885 main.go:141] libmachine: (newest-cni-962491) DBG | private KVM network mk-newest-cni-962491 192.168.61.0/24 created
	I0318 22:17:47.206718   70885 main.go:141] libmachine: (newest-cni-962491) Setting up store path in /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491 ...
	I0318 22:17:47.206750   70885 main.go:141] libmachine: (newest-cni-962491) Building disk image from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 22:17:47.206767   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:47.206739   70908 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 22:17:47.206881   70885 main.go:141] libmachine: (newest-cni-962491) Downloading /home/jenkins/minikube-integration/18421-5321/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso...
	I0318 22:17:47.452630   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:47.452513   70908 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa...
	I0318 22:17:47.589248   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:47.589091   70908 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/newest-cni-962491.rawdisk...
	I0318 22:17:47.589275   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Writing magic tar header
	I0318 22:17:47.589292   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Writing SSH key tar header
	I0318 22:17:47.589305   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:47.589248   70908 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491 ...
	I0318 22:17:47.589327   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491
	I0318 22:17:47.589406   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube/machines
	I0318 22:17:47.589437   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 22:17:47.589452   70885 main.go:141] libmachine: (newest-cni-962491) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491 (perms=drwx------)
	I0318 22:17:47.589514   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18421-5321
	I0318 22:17:47.589541   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 22:17:47.589561   70885 main.go:141] libmachine: (newest-cni-962491) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube/machines (perms=drwxr-xr-x)
	I0318 22:17:47.589579   70885 main.go:141] libmachine: (newest-cni-962491) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321/.minikube (perms=drwxr-xr-x)
	I0318 22:17:47.589593   70885 main.go:141] libmachine: (newest-cni-962491) Setting executable bit set on /home/jenkins/minikube-integration/18421-5321 (perms=drwxrwxr-x)
	I0318 22:17:47.589610   70885 main.go:141] libmachine: (newest-cni-962491) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 22:17:47.589616   70885 main.go:141] libmachine: (newest-cni-962491) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 22:17:47.589622   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Checking permissions on dir: /home/jenkins
	I0318 22:17:47.589627   70885 main.go:141] libmachine: (newest-cni-962491) Creating domain...
	I0318 22:17:47.589650   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Checking permissions on dir: /home
	I0318 22:17:47.589664   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Skipping /home - not owner
	I0318 22:17:47.590909   70885 main.go:141] libmachine: (newest-cni-962491) define libvirt domain using xml: 
	I0318 22:17:47.590930   70885 main.go:141] libmachine: (newest-cni-962491) <domain type='kvm'>
	I0318 22:17:47.590941   70885 main.go:141] libmachine: (newest-cni-962491)   <name>newest-cni-962491</name>
	I0318 22:17:47.590949   70885 main.go:141] libmachine: (newest-cni-962491)   <memory unit='MiB'>2200</memory>
	I0318 22:17:47.590963   70885 main.go:141] libmachine: (newest-cni-962491)   <vcpu>2</vcpu>
	I0318 22:17:47.590973   70885 main.go:141] libmachine: (newest-cni-962491)   <features>
	I0318 22:17:47.590984   70885 main.go:141] libmachine: (newest-cni-962491)     <acpi/>
	I0318 22:17:47.591006   70885 main.go:141] libmachine: (newest-cni-962491)     <apic/>
	I0318 22:17:47.591019   70885 main.go:141] libmachine: (newest-cni-962491)     <pae/>
	I0318 22:17:47.591033   70885 main.go:141] libmachine: (newest-cni-962491)     
	I0318 22:17:47.591043   70885 main.go:141] libmachine: (newest-cni-962491)   </features>
	I0318 22:17:47.591054   70885 main.go:141] libmachine: (newest-cni-962491)   <cpu mode='host-passthrough'>
	I0318 22:17:47.591066   70885 main.go:141] libmachine: (newest-cni-962491)   
	I0318 22:17:47.591076   70885 main.go:141] libmachine: (newest-cni-962491)   </cpu>
	I0318 22:17:47.591087   70885 main.go:141] libmachine: (newest-cni-962491)   <os>
	I0318 22:17:47.591104   70885 main.go:141] libmachine: (newest-cni-962491)     <type>hvm</type>
	I0318 22:17:47.591118   70885 main.go:141] libmachine: (newest-cni-962491)     <boot dev='cdrom'/>
	I0318 22:17:47.591131   70885 main.go:141] libmachine: (newest-cni-962491)     <boot dev='hd'/>
	I0318 22:17:47.591140   70885 main.go:141] libmachine: (newest-cni-962491)     <bootmenu enable='no'/>
	I0318 22:17:47.591147   70885 main.go:141] libmachine: (newest-cni-962491)   </os>
	I0318 22:17:47.591158   70885 main.go:141] libmachine: (newest-cni-962491)   <devices>
	I0318 22:17:47.591166   70885 main.go:141] libmachine: (newest-cni-962491)     <disk type='file' device='cdrom'>
	I0318 22:17:47.591210   70885 main.go:141] libmachine: (newest-cni-962491)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/boot2docker.iso'/>
	I0318 22:17:47.591234   70885 main.go:141] libmachine: (newest-cni-962491)       <target dev='hdc' bus='scsi'/>
	I0318 22:17:47.591249   70885 main.go:141] libmachine: (newest-cni-962491)       <readonly/>
	I0318 22:17:47.591260   70885 main.go:141] libmachine: (newest-cni-962491)     </disk>
	I0318 22:17:47.591274   70885 main.go:141] libmachine: (newest-cni-962491)     <disk type='file' device='disk'>
	I0318 22:17:47.591287   70885 main.go:141] libmachine: (newest-cni-962491)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 22:17:47.591344   70885 main.go:141] libmachine: (newest-cni-962491)       <source file='/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/newest-cni-962491.rawdisk'/>
	I0318 22:17:47.591371   70885 main.go:141] libmachine: (newest-cni-962491)       <target dev='hda' bus='virtio'/>
	I0318 22:17:47.591381   70885 main.go:141] libmachine: (newest-cni-962491)     </disk>
	I0318 22:17:47.591395   70885 main.go:141] libmachine: (newest-cni-962491)     <interface type='network'>
	I0318 22:17:47.591406   70885 main.go:141] libmachine: (newest-cni-962491)       <source network='mk-newest-cni-962491'/>
	I0318 22:17:47.591416   70885 main.go:141] libmachine: (newest-cni-962491)       <model type='virtio'/>
	I0318 22:17:47.591426   70885 main.go:141] libmachine: (newest-cni-962491)     </interface>
	I0318 22:17:47.591435   70885 main.go:141] libmachine: (newest-cni-962491)     <interface type='network'>
	I0318 22:17:47.591446   70885 main.go:141] libmachine: (newest-cni-962491)       <source network='default'/>
	I0318 22:17:47.591456   70885 main.go:141] libmachine: (newest-cni-962491)       <model type='virtio'/>
	I0318 22:17:47.591468   70885 main.go:141] libmachine: (newest-cni-962491)     </interface>
	I0318 22:17:47.591477   70885 main.go:141] libmachine: (newest-cni-962491)     <serial type='pty'>
	I0318 22:17:47.591485   70885 main.go:141] libmachine: (newest-cni-962491)       <target port='0'/>
	I0318 22:17:47.591494   70885 main.go:141] libmachine: (newest-cni-962491)     </serial>
	I0318 22:17:47.591505   70885 main.go:141] libmachine: (newest-cni-962491)     <console type='pty'>
	I0318 22:17:47.591516   70885 main.go:141] libmachine: (newest-cni-962491)       <target type='serial' port='0'/>
	I0318 22:17:47.591527   70885 main.go:141] libmachine: (newest-cni-962491)     </console>
	I0318 22:17:47.591541   70885 main.go:141] libmachine: (newest-cni-962491)     <rng model='virtio'>
	I0318 22:17:47.591553   70885 main.go:141] libmachine: (newest-cni-962491)       <backend model='random'>/dev/random</backend>
	I0318 22:17:47.591563   70885 main.go:141] libmachine: (newest-cni-962491)     </rng>
	I0318 22:17:47.591571   70885 main.go:141] libmachine: (newest-cni-962491)     
	I0318 22:17:47.591579   70885 main.go:141] libmachine: (newest-cni-962491)     
	I0318 22:17:47.591587   70885 main.go:141] libmachine: (newest-cni-962491)   </devices>
	I0318 22:17:47.591616   70885 main.go:141] libmachine: (newest-cni-962491) </domain>
	I0318 22:17:47.591634   70885 main.go:141] libmachine: (newest-cni-962491) 
	I0318 22:17:47.596743   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:42:a1:54 in network default
	I0318 22:17:47.597273   70885 main.go:141] libmachine: (newest-cni-962491) Ensuring networks are active...
	I0318 22:17:47.597297   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:47.597996   70885 main.go:141] libmachine: (newest-cni-962491) Ensuring network default is active
	I0318 22:17:47.598322   70885 main.go:141] libmachine: (newest-cni-962491) Ensuring network mk-newest-cni-962491 is active
	I0318 22:17:47.598996   70885 main.go:141] libmachine: (newest-cni-962491) Getting domain xml...
	I0318 22:17:47.599746   70885 main.go:141] libmachine: (newest-cni-962491) Creating domain...
	I0318 22:17:48.898231   70885 main.go:141] libmachine: (newest-cni-962491) Waiting to get IP...
	I0318 22:17:48.898922   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:48.899319   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:48.899348   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:48.899275   70908 retry.go:31] will retry after 202.462758ms: waiting for machine to come up
	I0318 22:17:49.103653   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:49.104172   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:49.104204   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:49.104134   70908 retry.go:31] will retry after 238.554098ms: waiting for machine to come up
	I0318 22:17:49.344645   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:49.345150   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:49.345177   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:49.345092   70908 retry.go:31] will retry after 432.776708ms: waiting for machine to come up
	I0318 22:17:49.779862   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:49.780337   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:49.780366   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:49.780281   70908 retry.go:31] will retry after 458.278966ms: waiting for machine to come up
	I0318 22:17:50.239855   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:50.240327   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:50.240355   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:50.240289   70908 retry.go:31] will retry after 727.971415ms: waiting for machine to come up
	I0318 22:17:50.970065   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:50.970561   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:50.970604   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:50.970515   70908 retry.go:31] will retry after 920.778037ms: waiting for machine to come up
	I0318 22:17:51.892314   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:51.892763   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:51.892786   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:51.892720   70908 retry.go:31] will retry after 1.139814145s: waiting for machine to come up
	I0318 22:17:53.034121   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:53.034651   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:53.034675   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:53.034596   70908 retry.go:31] will retry after 961.912186ms: waiting for machine to come up
	I0318 22:17:53.997578   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:53.998005   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:53.998032   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:53.997972   70908 retry.go:31] will retry after 1.622479118s: waiting for machine to come up
	I0318 22:17:55.622306   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:55.622782   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:55.622812   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:55.622721   70908 retry.go:31] will retry after 1.734247342s: waiting for machine to come up
	I0318 22:17:57.359158   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:57.359651   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:57.359676   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:57.359613   70908 retry.go:31] will retry after 2.376694261s: waiting for machine to come up
	I0318 22:17:59.738932   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:17:59.739457   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:17:59.739488   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:17:59.739401   70908 retry.go:31] will retry after 3.177919462s: waiting for machine to come up
	I0318 22:18:02.919123   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:02.919558   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:18:02.919577   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:18:02.919521   70908 retry.go:31] will retry after 4.168497434s: waiting for machine to come up
	I0318 22:18:07.092752   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:07.093252   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:18:07.093280   70885 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:18:07.093200   70908 retry.go:31] will retry after 3.742395442s: waiting for machine to come up
	I0318 22:18:10.838436   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:10.838860   70885 main.go:141] libmachine: (newest-cni-962491) Found IP for machine: 192.168.61.192
	I0318 22:18:10.838905   70885 main.go:141] libmachine: (newest-cni-962491) Reserving static IP address...
	I0318 22:18:10.838919   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has current primary IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:10.839277   70885 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find host DHCP lease matching {name: "newest-cni-962491", mac: "52:54:00:0a:88:16", ip: "192.168.61.192"} in network mk-newest-cni-962491
	I0318 22:18:10.914127   70885 main.go:141] libmachine: (newest-cni-962491) Reserved static IP address: 192.168.61.192
	I0318 22:18:10.914158   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Getting to WaitForSSH function...
	I0318 22:18:10.914192   70885 main.go:141] libmachine: (newest-cni-962491) Waiting for SSH to be available...
	I0318 22:18:10.917222   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:10.917700   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:10.917741   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:10.917871   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Using SSH client type: external
	I0318 22:18:10.917907   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa (-rw-------)
	I0318 22:18:10.917953   70885 main.go:141] libmachine: (newest-cni-962491) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 22:18:10.917970   70885 main.go:141] libmachine: (newest-cni-962491) DBG | About to run SSH command:
	I0318 22:18:10.917983   70885 main.go:141] libmachine: (newest-cni-962491) DBG | exit 0
	I0318 22:18:11.045246   70885 main.go:141] libmachine: (newest-cni-962491) DBG | SSH cmd err, output: <nil>: 
	I0318 22:18:11.045489   70885 main.go:141] libmachine: (newest-cni-962491) KVM machine creation complete!
	I0318 22:18:11.045835   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetConfigRaw
	I0318 22:18:11.046432   70885 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:11.046639   70885 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:11.046836   70885 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 22:18:11.046857   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetState
	I0318 22:18:11.048187   70885 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 22:18:11.048201   70885 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 22:18:11.048207   70885 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 22:18:11.048213   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:11.050645   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.051053   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:11.051082   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.051213   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:11.051381   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.051549   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.051732   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:11.051913   70885 main.go:141] libmachine: Using SSH client type: native
	I0318 22:18:11.052115   70885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:18:11.052133   70885 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 22:18:11.164705   70885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 22:18:11.164734   70885 main.go:141] libmachine: Detecting the provisioner...
	I0318 22:18:11.164745   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:11.167744   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.168111   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:11.168143   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.168257   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:11.168463   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.168638   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.168803   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:11.169020   70885 main.go:141] libmachine: Using SSH client type: native
	I0318 22:18:11.169205   70885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:18:11.169216   70885 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 22:18:11.278502   70885 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 22:18:11.278567   70885 main.go:141] libmachine: found compatible host: buildroot
	I0318 22:18:11.278577   70885 main.go:141] libmachine: Provisioning with buildroot...
	I0318 22:18:11.278587   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetMachineName
	I0318 22:18:11.278827   70885 buildroot.go:166] provisioning hostname "newest-cni-962491"
	I0318 22:18:11.278854   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetMachineName
	I0318 22:18:11.279089   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:11.281787   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.282261   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:11.282289   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.282450   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:11.282634   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.282792   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.282897   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:11.283046   70885 main.go:141] libmachine: Using SSH client type: native
	I0318 22:18:11.283240   70885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:18:11.283255   70885 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-962491 && echo "newest-cni-962491" | sudo tee /etc/hostname
	I0318 22:18:11.404938   70885 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-962491
	
	I0318 22:18:11.404973   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:11.407771   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.408186   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:11.408214   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.408380   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:11.408538   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.408656   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.408743   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:11.408883   70885 main.go:141] libmachine: Using SSH client type: native
	I0318 22:18:11.409090   70885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:18:11.409110   70885 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-962491' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-962491/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-962491' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 22:18:11.528531   70885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 22:18:11.528568   70885 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 22:18:11.528588   70885 buildroot.go:174] setting up certificates
	I0318 22:18:11.528612   70885 provision.go:84] configureAuth start
	I0318 22:18:11.528624   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetMachineName
	I0318 22:18:11.528946   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetIP
	I0318 22:18:11.531578   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.532008   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:11.532038   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.532159   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:11.534580   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.534958   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:11.534987   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.535135   70885 provision.go:143] copyHostCerts
	I0318 22:18:11.535229   70885 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 22:18:11.535244   70885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 22:18:11.535341   70885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 22:18:11.535509   70885 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 22:18:11.535523   70885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 22:18:11.535560   70885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 22:18:11.535635   70885 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 22:18:11.535643   70885 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 22:18:11.535673   70885 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 22:18:11.535723   70885 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.newest-cni-962491 san=[127.0.0.1 192.168.61.192 localhost minikube newest-cni-962491]
	I0318 22:18:11.689441   70885 provision.go:177] copyRemoteCerts
	I0318 22:18:11.689493   70885 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 22:18:11.689515   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:11.692247   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.692576   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:11.692607   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.692849   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:11.693050   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.693229   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:11.693362   70885 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:18:11.780887   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 22:18:11.810075   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 22:18:11.840567   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 22:18:11.871425   70885 provision.go:87] duration metric: took 342.801562ms to configureAuth
	I0318 22:18:11.871460   70885 buildroot.go:189] setting minikube options for container-runtime
	I0318 22:18:11.871679   70885 config.go:182] Loaded profile config "newest-cni-962491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 22:18:11.871757   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:11.874465   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.874819   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:11.874863   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:11.875060   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:11.875288   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.875520   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:11.875709   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:11.875864   70885 main.go:141] libmachine: Using SSH client type: native
	I0318 22:18:11.876084   70885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:18:11.876112   70885 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 22:18:12.175611   70885 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 22:18:12.175645   70885 main.go:141] libmachine: Checking connection to Docker...
	I0318 22:18:12.175657   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetURL
	I0318 22:18:12.177039   70885 main.go:141] libmachine: (newest-cni-962491) DBG | Using libvirt version 6000000
	I0318 22:18:12.179576   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.179999   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:12.180031   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.180243   70885 main.go:141] libmachine: Docker is up and running!
	I0318 22:18:12.180260   70885 main.go:141] libmachine: Reticulating splines...
	I0318 22:18:12.180268   70885 client.go:171] duration metric: took 25.060594074s to LocalClient.Create
	I0318 22:18:12.180310   70885 start.go:167] duration metric: took 25.06069354s to libmachine.API.Create "newest-cni-962491"
	I0318 22:18:12.180322   70885 start.go:293] postStartSetup for "newest-cni-962491" (driver="kvm2")
	I0318 22:18:12.180342   70885 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 22:18:12.180390   70885 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:12.180671   70885 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 22:18:12.180701   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:12.183289   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.183715   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:12.183741   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.183908   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:12.184094   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:12.184273   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:12.184464   70885 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:18:12.272230   70885 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 22:18:12.277189   70885 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 22:18:12.277217   70885 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 22:18:12.277280   70885 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 22:18:12.277372   70885 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 22:18:12.277462   70885 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 22:18:12.287855   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 22:18:12.316690   70885 start.go:296] duration metric: took 136.350369ms for postStartSetup
	I0318 22:18:12.316738   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetConfigRaw
	I0318 22:18:12.317336   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetIP
	I0318 22:18:12.320019   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.320396   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:12.320421   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.320616   70885 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/config.json ...
	I0318 22:18:12.320777   70885 start.go:128] duration metric: took 25.21931488s to createHost
	I0318 22:18:12.320795   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:12.323166   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.323555   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:12.323583   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.323767   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:12.323946   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:12.324105   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:12.324259   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:12.324454   70885 main.go:141] libmachine: Using SSH client type: native
	I0318 22:18:12.324611   70885 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:18:12.324622   70885 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 22:18:12.438463   70885 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710800292.423019273
	
	I0318 22:18:12.438484   70885 fix.go:216] guest clock: 1710800292.423019273
	I0318 22:18:12.438494   70885 fix.go:229] Guest: 2024-03-18 22:18:12.423019273 +0000 UTC Remote: 2024-03-18 22:18:12.320787449 +0000 UTC m=+25.340963245 (delta=102.231824ms)
	I0318 22:18:12.438538   70885 fix.go:200] guest clock delta is within tolerance: 102.231824ms
	I0318 22:18:12.438547   70885 start.go:83] releasing machines lock for "newest-cni-962491", held for 25.337160094s
	I0318 22:18:12.438567   70885 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:12.438841   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetIP
	I0318 22:18:12.441603   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.442093   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:12.442115   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.442344   70885 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:12.442941   70885 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:12.443156   70885 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:12.443256   70885 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 22:18:12.443307   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:12.443337   70885 ssh_runner.go:195] Run: cat /version.json
	I0318 22:18:12.443360   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:18:12.446209   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.446573   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:12.446595   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.446624   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.446852   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:12.447046   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:12.447209   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:12.447277   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:12.447298   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:12.447354   70885 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:18:12.447668   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:18:12.447815   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:18:12.447930   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:18:12.448091   70885 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:18:12.547815   70885 ssh_runner.go:195] Run: systemctl --version
	I0318 22:18:12.554531   70885 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 22:18:12.726737   70885 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 22:18:12.733551   70885 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 22:18:12.733616   70885 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 22:18:12.751965   70885 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 22:18:12.751986   70885 start.go:494] detecting cgroup driver to use...
	I0318 22:18:12.752035   70885 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 22:18:12.770198   70885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 22:18:12.786593   70885 docker.go:217] disabling cri-docker service (if available) ...
	I0318 22:18:12.786665   70885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 22:18:12.802050   70885 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 22:18:12.816820   70885 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 22:18:12.954849   70885 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 22:18:13.132674   70885 docker.go:233] disabling docker service ...
	I0318 22:18:13.132745   70885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 22:18:13.151204   70885 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 22:18:13.165507   70885 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 22:18:13.321636   70885 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 22:18:13.458819   70885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 22:18:13.476015   70885 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 22:18:13.498144   70885 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 22:18:13.498193   70885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:18:13.510059   70885 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 22:18:13.510111   70885 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:18:13.522597   70885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:18:13.534049   70885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:18:13.545817   70885 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 22:18:13.557967   70885 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:18:13.570852   70885 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:18:13.591642   70885 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:18:13.603921   70885 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 22:18:13.614982   70885 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 22:18:13.615043   70885 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 22:18:13.630055   70885 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 22:18:13.642843   70885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:18:13.784655   70885 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 22:18:13.944349   70885 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 22:18:13.944450   70885 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 22:18:13.950475   70885 start.go:562] Will wait 60s for crictl version
	I0318 22:18:13.950540   70885 ssh_runner.go:195] Run: which crictl
	I0318 22:18:13.955431   70885 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 22:18:14.003249   70885 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 22:18:14.003356   70885 ssh_runner.go:195] Run: crio --version
	I0318 22:18:14.038057   70885 ssh_runner.go:195] Run: crio --version
	I0318 22:18:14.075404   70885 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 22:18:14.076635   70885 main.go:141] libmachine: (newest-cni-962491) Calling .GetIP
	I0318 22:18:14.079166   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:14.079553   70885 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:18:03 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:18:14.079575   70885 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:18:14.079793   70885 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 22:18:14.085275   70885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 22:18:14.102860   70885 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0318 22:18:14.104167   70885 kubeadm.go:877] updating cluster {Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 22:18:14.104302   70885 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 22:18:14.104381   70885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 22:18:14.147954   70885 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 22:18:14.148074   70885 ssh_runner.go:195] Run: which lz4
	I0318 22:18:14.154260   70885 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 22:18:14.159252   70885 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 22:18:14.159283   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0318 22:18:15.966480   70885 crio.go:462] duration metric: took 1.812257597s to copy over tarball
	I0318 22:18:15.966560   70885 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 22:18:18.542382   70885 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.57578839s)
	I0318 22:18:18.542420   70885 crio.go:469] duration metric: took 2.575910716s to extract the tarball
	I0318 22:18:18.542429   70885 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 22:18:18.598257   70885 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 22:18:18.650953   70885 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 22:18:18.650975   70885 cache_images.go:84] Images are preloaded, skipping loading
	I0318 22:18:18.650982   70885 kubeadm.go:928] updating node { 192.168.61.192 8443 v1.29.0-rc.2 crio true true} ...
	I0318 22:18:18.651087   70885 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-962491 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 22:18:18.651157   70885 ssh_runner.go:195] Run: crio config
	I0318 22:18:18.707142   70885 cni.go:84] Creating CNI manager for ""
	I0318 22:18:18.707165   70885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:18:18.707176   70885 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0318 22:18:18.707202   70885 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.192 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-962491 NodeName:newest-cni-962491 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.61.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 22:18:18.707356   70885 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-962491"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 22:18:18.707414   70885 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 22:18:18.721192   70885 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 22:18:18.721258   70885 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 22:18:18.734723   70885 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0318 22:18:18.756107   70885 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 22:18:18.775046   70885 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0318 22:18:18.795364   70885 ssh_runner.go:195] Run: grep 192.168.61.192	control-plane.minikube.internal$ /etc/hosts
	I0318 22:18:18.799956   70885 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 22:18:18.815256   70885 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:18:18.951095   70885 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:18:18.970397   70885 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491 for IP: 192.168.61.192
	I0318 22:18:18.970425   70885 certs.go:194] generating shared ca certs ...
	I0318 22:18:18.970441   70885 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:18:18.970672   70885 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 22:18:18.970762   70885 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 22:18:18.970778   70885 certs.go:256] generating profile certs ...
	I0318 22:18:18.970857   70885 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/client.key
	I0318 22:18:18.970878   70885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/client.crt with IP's: []
	I0318 22:18:19.111095   70885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/client.crt ...
	I0318 22:18:19.111129   70885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/client.crt: {Name:mk1091642fef8b2f36c02dc57b01d7d5d53bc028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:18:19.111340   70885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/client.key ...
	I0318 22:18:19.111357   70885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/client.key: {Name:mke6ea5881535399376ecb989f2709d86a5c1ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:18:19.111488   70885 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.key.f1d464e4
	I0318 22:18:19.111518   70885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.crt.f1d464e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.192]
	I0318 22:18:19.226550   70885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.crt.f1d464e4 ...
	I0318 22:18:19.226595   70885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.crt.f1d464e4: {Name:mk3b8ca57f9e28596aa654b8b887dd8aab0469ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:18:19.226805   70885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.key.f1d464e4 ...
	I0318 22:18:19.226830   70885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.key.f1d464e4: {Name:mk3ab1d6b5479f82d4593639be2ba5beda4a525f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:18:19.226970   70885 certs.go:381] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.crt.f1d464e4 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.crt
	I0318 22:18:19.227090   70885 certs.go:385] copying /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.key.f1d464e4 -> /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.key
	I0318 22:18:19.227174   70885 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.key
	I0318 22:18:19.227203   70885 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.crt with IP's: []
	I0318 22:18:19.286074   70885 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.crt ...
	I0318 22:18:19.286108   70885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.crt: {Name:mkd9c46b024c7a4d86943ec9f44e5e24bf6c693e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:18:19.286267   70885 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.key ...
	I0318 22:18:19.286294   70885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.key: {Name:mk9ca43cb75fca6fdc4cea088bd7a8133136ada8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:18:19.286482   70885 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 22:18:19.286519   70885 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 22:18:19.286529   70885 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 22:18:19.286553   70885 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 22:18:19.286576   70885 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 22:18:19.286596   70885 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 22:18:19.286637   70885 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 22:18:19.287286   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 22:18:19.316885   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 22:18:19.348729   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 22:18:19.380956   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 22:18:19.411791   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 22:18:19.442840   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 22:18:19.475674   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 22:18:19.560118   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 22:18:19.594400   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 22:18:19.624694   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 22:18:19.656092   70885 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 22:18:19.686390   70885 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 22:18:19.706511   70885 ssh_runner.go:195] Run: openssl version
	I0318 22:18:19.714082   70885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 22:18:19.728242   70885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 22:18:19.733774   70885 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 22:18:19.733858   70885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 22:18:19.740515   70885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 22:18:19.752887   70885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 22:18:19.765152   70885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 22:18:19.770845   70885 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 22:18:19.770914   70885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 22:18:19.777316   70885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 22:18:19.790641   70885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 22:18:19.807193   70885 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:18:19.818973   70885 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:18:19.819043   70885 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:18:19.827969   70885 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 22:18:19.847469   70885 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 22:18:19.854357   70885 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 22:18:19.854427   70885 kubeadm.go:391] StartCluster: {Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:18:19.854546   70885 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 22:18:19.854617   70885 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 22:18:19.908962   70885 cri.go:89] found id: ""
	I0318 22:18:19.909028   70885 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 22:18:19.922810   70885 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:18:19.935106   70885 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:18:19.946281   70885 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:18:19.946313   70885 kubeadm.go:156] found existing configuration files:
	
	I0318 22:18:19.946422   70885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:18:19.957207   70885 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:18:19.957268   70885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:18:19.968262   70885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:18:19.979210   70885 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:18:19.979280   70885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:18:19.990944   70885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:18:20.002676   70885 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:18:20.002743   70885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:18:20.014396   70885 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:18:20.025791   70885 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:18:20.025860   70885 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:18:20.037986   70885 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:18:20.292819   70885 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:18:31.904105   70885 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 22:18:31.904172   70885 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:18:31.904249   70885 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:18:31.904372   70885 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:18:31.904479   70885 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:18:31.904574   70885 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:18:31.906223   70885 out.go:204]   - Generating certificates and keys ...
	I0318 22:18:31.906308   70885 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:18:31.906381   70885 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:18:31.906460   70885 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 22:18:31.906543   70885 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 22:18:31.906637   70885 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 22:18:31.906715   70885 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 22:18:31.906784   70885 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 22:18:31.906980   70885 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-962491] and IPs [192.168.61.192 127.0.0.1 ::1]
	I0318 22:18:31.907064   70885 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 22:18:31.907211   70885 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-962491] and IPs [192.168.61.192 127.0.0.1 ::1]
	I0318 22:18:31.907320   70885 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 22:18:31.907417   70885 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 22:18:31.907477   70885 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 22:18:31.907557   70885 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:18:31.907629   70885 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:18:31.907678   70885 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 22:18:31.907769   70885 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:18:31.907869   70885 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:18:31.907931   70885 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:18:31.908025   70885 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:18:31.908131   70885 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:18:31.909707   70885 out.go:204]   - Booting up control plane ...
	I0318 22:18:31.909842   70885 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:18:31.909953   70885 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:18:31.910040   70885 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:18:31.910168   70885 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:18:31.910249   70885 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:18:31.910287   70885 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:18:31.910418   70885 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:18:31.910481   70885 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.507469 seconds
	I0318 22:18:31.910567   70885 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:18:31.910667   70885 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:18:31.910717   70885 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:18:31.910863   70885 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-962491 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:18:31.910910   70885 kubeadm.go:309] [bootstrap-token] Using token: oqvtlt.qg5k7i4m2iwyc5nm
	I0318 22:18:31.912433   70885 out.go:204]   - Configuring RBAC rules ...
	I0318 22:18:31.912537   70885 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:18:31.912629   70885 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:18:31.912783   70885 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:18:31.912983   70885 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:18:31.913110   70885 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:18:31.913241   70885 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:18:31.913395   70885 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:18:31.913448   70885 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:18:31.913514   70885 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:18:31.913525   70885 kubeadm.go:309] 
	I0318 22:18:31.913602   70885 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:18:31.913623   70885 kubeadm.go:309] 
	I0318 22:18:31.913729   70885 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:18:31.913743   70885 kubeadm.go:309] 
	I0318 22:18:31.913783   70885 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:18:31.913845   70885 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:18:31.913922   70885 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:18:31.913934   70885 kubeadm.go:309] 
	I0318 22:18:31.914008   70885 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:18:31.914039   70885 kubeadm.go:309] 
	I0318 22:18:31.914099   70885 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:18:31.914108   70885 kubeadm.go:309] 
	I0318 22:18:31.914184   70885 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:18:31.914259   70885 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:18:31.914331   70885 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:18:31.914339   70885 kubeadm.go:309] 
	I0318 22:18:31.914415   70885 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:18:31.914513   70885 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:18:31.914524   70885 kubeadm.go:309] 
	I0318 22:18:31.914627   70885 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oqvtlt.qg5k7i4m2iwyc5nm \
	I0318 22:18:31.914793   70885 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:18:31.914840   70885 kubeadm.go:309] 	--control-plane 
	I0318 22:18:31.914850   70885 kubeadm.go:309] 
	I0318 22:18:31.914967   70885 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:18:31.914978   70885 kubeadm.go:309] 
	I0318 22:18:31.915094   70885 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oqvtlt.qg5k7i4m2iwyc5nm \
	I0318 22:18:31.915238   70885 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:18:31.915276   70885 cni.go:84] Creating CNI manager for ""
	I0318 22:18:31.915288   70885 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:18:31.916746   70885 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:18:31.918221   70885 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:18:31.991888   70885 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 22:18:32.110172   70885 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:18:32.110260   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:32.110265   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-962491 minikube.k8s.io/updated_at=2024_03_18T22_18_32_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=newest-cni-962491 minikube.k8s.io/primary=true
	I0318 22:18:32.167546   70885 ops.go:34] apiserver oom_adj: -16
	I0318 22:18:32.392880   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:32.893024   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:33.393862   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:33.893028   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:34.393453   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:34.893059   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:35.393648   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:35.893028   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:36.393062   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:18:36.893041   70885 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
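
The log above captures the bootstrap sequence minikube runs on the newest-cni-962491 node: remove the stale /etc/kubernetes/*.conf files, run kubeadm init against /var/tmp/minikube/kubeadm.yaml with a fixed --ignore-preflight-errors list, write a bridge CNI conflist to /etc/cni/net.d, then create the minikube-rbac clusterrolebinding and label the node before polling for the default service account. The Go sketch below is illustrative only: it replays those shell commands locally with os/exec (the real run drives them on the guest VM through minikube's ssh_runner), the preflight-error list is shortened, and the exact label set is reduced to two entries taken from the logged kubectl call.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // runBash mirrors the ssh_runner invocations in the log: each step is a
    // shell command. Assumption: it runs locally here instead of over SSH.
    func runBash(cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		log.Fatalf("command %q failed: %v\n%s", cmd, err, out)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	// 1. Stale kubeconfig cleanup: the grep for the control-plane endpoint
    	//    failed for every file above, so each one is removed with rm -f.
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		runBash("sudo rm -f /etc/kubernetes/" + f)
    	}

    	// 2. kubeadm init against the generated config, ignoring preflight
    	//    errors as in the logged command (list shortened for brevity).
    	runBash(`sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" ` +
    		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
    		`--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem`)

    	// 3. Bridge CNI: create /etc/cni/net.d; the log then copies a
    	//    457-byte 1-k8s.conflist into that directory from memory.
    	runBash("sudo mkdir -p /etc/cni/net.d")

    	// 4. RBAC binding and node labels, matching the kubectl calls above.
    	kubectl := "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl"
    	runBash(kubectl + " create clusterrolebinding minikube-rbac" +
    		" --clusterrole=cluster-admin --serviceaccount=kube-system:default" +
    		" --kubeconfig=/var/lib/minikube/kubeconfig")
    	runBash(kubectl + " --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes" +
    		" newest-cni-962491 minikube.k8s.io/name=newest-cni-962491 minikube.k8s.io/primary=true")
    }

After these steps the log simply retries "kubectl get sa default" until the default service account exists, which is why the same command repeats every 500ms above.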
	
	
	==> CRI-O <==
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.401406889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a478e622-9370-41e6-b3ca-1911e9889743 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.403669059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94938f61-db2b-459d-ba8c-dcffb5aeee43 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.404044248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800319404012862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94938f61-db2b-459d-ba8c-dcffb5aeee43 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.404815247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2190aed4-6292-40d2-b312-2af484073248 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.404868820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2190aed4-6292-40d2-b312-2af484073248 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.405070186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799207958028017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314b6b17b16f2cdb77890e538b13ff84b0215fd67ca536c79a850c3cd6e34fed,PodSandboxId:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710799186843369189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{io.kubernetes.container.hash: 9397d2db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540,PodSandboxId:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710799183779707432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,},Annotations:map[string]string{io.kubernetes.container.hash: fe1be4f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5,PodSandboxId:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710799176840495340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a3
97-cdf1a578c5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 31514310,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710799176823731338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d4
51,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5,PodSandboxId:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710799171568912986,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4,PodSandboxId:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710799171538705869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bc826559,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84,PodSandboxId:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710799171453125900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce,PodSandboxId:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710799171348526233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1f2e3c34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2190aed4-6292-40d2-b312-2af484073248 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.415473681Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=740a546b-ed36-4462-9bdc-859724a17207 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.415686464Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-6mtzp,Uid:b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710799183505145217,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T21:59:35.597349683Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&PodSandboxMetadata{Name:busybox,Uid:3f9c8026-8490-4959-a7d6-fc5d82c4af3b,Namespace:default,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1710799183489867578,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T21:59:35.597344886Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:df32251f61b9d83f4898b03bf3ca234a5b13cffd05889ece9cc7a27db8017af0,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-rdthh,Uid:50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710799181700493215,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-rdthh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T21:59:35.5
97358222Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d7579bb6-4512-4a79-adf6-40745192d451,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710799175921854546,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T21:59:35.597337114Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&PodSandboxMetadata{Name:kube-proxy-kkrzx,Uid:7e568f4e-de96-4981-a397-cdf1a578c5b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710799175918886112,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a397-cdf1a578c5b6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-03-18T21:59:35.597353235Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-963041,Uid:099dd0c63e5e8d8dc7f021facb5b866e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710799171138063420,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 099dd0c63e5e8d8dc7f021facb5b866e,kubernetes.io/config.seen: 2024-03-18T21:59:30.595483858Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-963041,Uid:8cdf4749647f28b7eb9ac73fd0e68783,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710799171127188479,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.84:2379,kubernetes.io/config.hash: 8cdf4749647f28b7eb9ac73fd0e68783,kubernetes.io/config.seen: 2024-03-18T21:59:30.664937379Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-963041,Uid:0fad90d2648d7ad952ad560af5b502ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710799171122980136,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.84:8443,kubernetes.io/config.hash: 0fad90d2648d7ad952ad560af5b502ec,kubernetes.io/config.seen: 2024-03-18T21:59:30.595485220Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-963041,Uid:7e6c7c86d5cd656079c444a6a4bd8489,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710799171113553292,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7e6c7c86d5cd656079c444a6a4bd8489,kube
rnetes.io/config.seen: 2024-03-18T21:59:30.595478135Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=740a546b-ed36-4462-9bdc-859724a17207 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.416705971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58594c3b-c747-478d-8406-2f43a5b0d353 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.416755086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58594c3b-c747-478d-8406-2f43a5b0d353 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.416941022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799207958028017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314b6b17b16f2cdb77890e538b13ff84b0215fd67ca536c79a850c3cd6e34fed,PodSandboxId:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710799186843369189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{io.kubernetes.container.hash: 9397d2db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540,PodSandboxId:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710799183779707432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,},Annotations:map[string]string{io.kubernetes.container.hash: fe1be4f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5,PodSandboxId:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710799176840495340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a3
97-cdf1a578c5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 31514310,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710799176823731338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d4
51,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5,PodSandboxId:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710799171568912986,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4,PodSandboxId:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710799171538705869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bc826559,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84,PodSandboxId:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710799171453125900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce,PodSandboxId:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710799171348526233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1f2e3c34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58594c3b-c747-478d-8406-2f43a5b0d353 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.457075678Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61322b65-1519-4928-a178-c55deda69cd1 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.457203836Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61322b65-1519-4928-a178-c55deda69cd1 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.461102736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3efbf9ad-aa68-492c-bec2-0254a3e2e418 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.462189212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800319462107241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3efbf9ad-aa68-492c-bec2-0254a3e2e418 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.463439833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf9f5a7a-ef3f-48b9-b6be-ec5017733204 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.463511826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf9f5a7a-ef3f-48b9-b6be-ec5017733204 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.464029352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799207958028017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314b6b17b16f2cdb77890e538b13ff84b0215fd67ca536c79a850c3cd6e34fed,PodSandboxId:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710799186843369189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{io.kubernetes.container.hash: 9397d2db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540,PodSandboxId:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710799183779707432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,},Annotations:map[string]string{io.kubernetes.container.hash: fe1be4f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5,PodSandboxId:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710799176840495340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a3
97-cdf1a578c5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 31514310,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710799176823731338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d4
51,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5,PodSandboxId:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710799171568912986,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4,PodSandboxId:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710799171538705869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bc826559,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84,PodSandboxId:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710799171453125900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce,PodSandboxId:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710799171348526233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1f2e3c34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf9f5a7a-ef3f-48b9-b6be-ec5017733204 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.500928876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fcf9ca99-1520-4953-81fd-b66d3292ae04 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.501032092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fcf9ca99-1520-4953-81fd-b66d3292ae04 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.502194077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=977bdfa7-8324-40c1-a3e6-e0d3d1edfe1c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.502651550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800319502620044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=977bdfa7-8324-40c1-a3e6-e0d3d1edfe1c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.503208897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16a5e3be-d7df-45fc-a283-67f14ee7a2a4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.503374267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16a5e3be-d7df-45fc-a283-67f14ee7a2a4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:18:39 no-preload-963041 crio[697]: time="2024-03-18 22:18:39.503588233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799207958028017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d451,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:314b6b17b16f2cdb77890e538b13ff84b0215fd67ca536c79a850c3cd6e34fed,PodSandboxId:af474d8de5559fb4cb9996fc19cc4d3aa0aca34e41aa7551c8a6767d7574bbbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710799186843369189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f9c8026-8490-4959-a7d6-fc5d82c4af3b,},Annotations:map[string]string{io.kubernetes.container.hash: 9397d2db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540,PodSandboxId:345b2c562f629ed8e2e2e30c19d2c1aab796c38eda98f82eff60a4c3c0c2a54c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710799183779707432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-6mtzp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5c2b5e8-23c6-493b-97cd-861ca5c9d28a,},Annotations:map[string]string{io.kubernetes.container.hash: fe1be4f4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5,PodSandboxId:127a6274170c06cfad61e9432948d3b8360822ffd8dc8e622e86950c376f1f00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710799176840495340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkrzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e568f4e-de96-4981-a3
97-cdf1a578c5b6,},Annotations:map[string]string{io.kubernetes.container.hash: 31514310,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968,PodSandboxId:956268acc0e56ab76c153e0da4c2db082b24a3deeb2a42e9aa95af6792b55fa7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710799176823731338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7579bb6-4512-4a79-adf6-40745192d4
51,},Annotations:map[string]string{io.kubernetes.container.hash: 2682487b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5,PodSandboxId:cc3be690b0316e919a84e10561ff129d574fffce17fdeb96e58236555dbfe92b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710799171568912986,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 099dd0c63e5e8d8dc7f021facb5b866e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4,PodSandboxId:c3268ad6d80ba8930cfa2345f2e7f5ccb1d277e2a01d1e88017ef32f981164cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710799171538705869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cdf4749647f28b7eb9ac73fd0e68783,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bc826559,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84,PodSandboxId:33d230c469ced032e5ec4e63506257e0f9780924558e94756c877dcb701a1fba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710799171453125900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e6c7c86d5cd656079c444a6a4bd8489,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce,PodSandboxId:59ae21676f5406b0494db297b9755c857d4788f72f77066b2f31000223942216,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710799171348526233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-963041,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fad90d2648d7ad952ad560af5b502ec,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1f2e3c34,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16a5e3be-d7df-45fc-a283-67f14ee7a2a4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9559a9b3fa160       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   956268acc0e56       storage-provisioner
	314b6b17b16f2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   af474d8de5559       busybox
	95d95025af787       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   345b2c562f629       coredns-76f75df574-6mtzp
	757a8fc5ae06d       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      19 minutes ago      Running             kube-proxy                1                   127a6274170c0       kube-proxy-kkrzx
	761bc0d14f31e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   956268acc0e56       storage-provisioner
	4896452ff8ddb       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      19 minutes ago      Running             kube-scheduler            1                   cc3be690b0316       kube-scheduler-no-preload-963041
	d27b0e98d5f67       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      19 minutes ago      Running             etcd                      1                   c3268ad6d80ba       etcd-no-preload-963041
	6b309d737fd2f       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      19 minutes ago      Running             kube-controller-manager   1                   33d230c469ced       kube-controller-manager-no-preload-963041
	d723ad24bd61e       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      19 minutes ago      Running             kube-apiserver            1                   59ae21676f540       kube-apiserver-no-preload-963041
	
	
	==> coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33148 - 53974 "HINFO IN 4416264748007856954.6098944003411770047. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014649442s
	
	
	==> describe nodes <==
	Name:               no-preload-963041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-963041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=no-preload-963041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T21_50_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 21:50:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-963041
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 22:18:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 22:15:23 +0000   Mon, 18 Mar 2024 21:50:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 22:15:23 +0000   Mon, 18 Mar 2024 21:50:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 22:15:23 +0000   Mon, 18 Mar 2024 21:50:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 22:15:23 +0000   Mon, 18 Mar 2024 21:59:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.84
	  Hostname:    no-preload-963041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b568cf6d942140899f719c07fa284928
	  System UUID:                b568cf6d-9421-4089-9f71-9c07fa284928
	  Boot ID:                    2c801869-f97e-42b2-8386-4a51a6feb5cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-76f75df574-6mtzp                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-no-preload-963041                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kube-apiserver-no-preload-963041             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-no-preload-963041    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-kkrzx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-no-preload-963041             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 metrics-server-57f55c9bc5-rdthh              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         26m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node no-preload-963041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node no-preload-963041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node no-preload-963041 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     27m                kubelet          Node no-preload-963041 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node no-preload-963041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node no-preload-963041 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeReady                27m                kubelet          Node no-preload-963041 status is now: NodeReady
	  Normal  RegisteredNode           27m                node-controller  Node no-preload-963041 event: Registered Node no-preload-963041 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-963041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-963041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-963041 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-963041 event: Registered Node no-preload-963041 in Controller
	
	
	==> dmesg <==
	[Mar18 21:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052884] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044087] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.919726] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Mar18 21:59] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.715162] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.117415] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.063638] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068187] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.210520] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.143900] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.351645] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[ +17.979238] systemd-fstab-generator[1193]: Ignoring "noauto" option for root device
	[  +0.060092] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.362133] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +5.656136] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.916325] systemd-fstab-generator[1932]: Ignoring "noauto" option for root device
	[  +1.768167] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.171941] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] <==
	{"level":"info","ts":"2024-03-18T21:59:38.032491Z","caller":"traceutil/trace.go:171","msg":"trace[1954620342] linearizableReadLoop","detail":"{readStateIndex:570; appliedIndex:568; }","duration":"376.518475ms","start":"2024-03-18T21:59:37.65595Z","end":"2024-03-18T21:59:38.032468Z","steps":["trace[1954620342] 'read index received'  (duration: 292.529813ms)","trace[1954620342] 'applied index is now lower than readState.Index'  (duration: 83.98787ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T21:59:38.032511Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T21:59:37.519865Z","time spent":"512.64214ms","remote":"127.0.0.1:54636","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-03-18T21:59:38.032825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.8914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41054"}
	{"level":"info","ts":"2024-03-18T21:59:38.032859Z","caller":"traceutil/trace.go:171","msg":"trace[1455292273] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:539; }","duration":"376.933973ms","start":"2024-03-18T21:59:37.655913Z","end":"2024-03-18T21:59:38.032847Z","steps":["trace[1455292273] 'agreement among raft nodes before linearized reading'  (duration: 376.695368ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:59:38.032881Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T21:59:37.655896Z","time spent":"376.980085ms","remote":"127.0.0.1:54768","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":8,"response size":41078,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-03-18T21:59:38.034023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.945841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-963041\" ","response":"range_response_count:1 size:4604"}
	{"level":"info","ts":"2024-03-18T21:59:38.034088Z","caller":"traceutil/trace.go:171","msg":"trace[1350820805] range","detail":"{range_begin:/registry/minions/no-preload-963041; range_end:; response_count:1; response_revision:539; }","duration":"126.006905ms","start":"2024-03-18T21:59:37.908065Z","end":"2024-03-18T21:59:38.034072Z","steps":["trace[1350820805] 'agreement among raft nodes before linearized reading'  (duration: 125.916989ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:59:38.034345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.935211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-03-18T21:59:38.034397Z","caller":"traceutil/trace.go:171","msg":"trace[1815315574] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:539; }","duration":"187.987435ms","start":"2024-03-18T21:59:37.8464Z","end":"2024-03-18T21:59:38.034388Z","steps":["trace[1815315574] 'agreement among raft nodes before linearized reading'  (duration: 186.388303ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T21:59:38.21866Z","caller":"traceutil/trace.go:171","msg":"trace[964714991] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"182.039862ms","start":"2024-03-18T21:59:38.036603Z","end":"2024-03-18T21:59:38.218643Z","steps":["trace[964714991] 'process raft request'  (duration: 126.682506ms)","trace[964714991] 'compare'  (duration: 54.862272ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T21:59:38.218779Z","caller":"traceutil/trace.go:171","msg":"trace[418374600] linearizableReadLoop","detail":"{readStateIndex:571; appliedIndex:570; }","duration":"179.394561ms","start":"2024-03-18T21:59:38.039376Z","end":"2024-03-18T21:59:38.218771Z","steps":["trace[418374600] 'read index received'  (duration: 123.917935ms)","trace[418374600] 'applied index is now lower than readState.Index'  (duration: 55.475162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T21:59:38.21888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.505213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:480"}
	{"level":"info","ts":"2024-03-18T21:59:38.21953Z","caller":"traceutil/trace.go:171","msg":"trace[829889812] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:540; }","duration":"180.148682ms","start":"2024-03-18T21:59:38.039357Z","end":"2024-03-18T21:59:38.219505Z","steps":["trace[829889812] 'agreement among raft nodes before linearized reading'  (duration: 179.436188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:59:38.219837Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.356808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2024-03-18T21:59:38.219956Z","caller":"traceutil/trace.go:171","msg":"trace[325180291] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:540; }","duration":"180.472128ms","start":"2024-03-18T21:59:38.039464Z","end":"2024-03-18T21:59:38.219937Z","steps":["trace[325180291] 'agreement among raft nodes before linearized reading'  (duration: 180.294046ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T21:59:38.220202Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.444375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4604"}
	{"level":"info","ts":"2024-03-18T21:59:38.220375Z","caller":"traceutil/trace.go:171","msg":"trace[1760800734] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:540; }","duration":"171.618463ms","start":"2024-03-18T21:59:38.048749Z","end":"2024-03-18T21:59:38.220368Z","steps":["trace[1760800734] 'agreement among raft nodes before linearized reading'  (duration: 171.42094ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:09:33.146526Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":881}
	{"level":"info","ts":"2024-03-18T22:09:33.160914Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":881,"took":"13.250651ms","hash":2364969846}
	{"level":"info","ts":"2024-03-18T22:09:33.161016Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2364969846,"revision":881,"compact-revision":-1}
	{"level":"info","ts":"2024-03-18T22:14:33.156454Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1123}
	{"level":"info","ts":"2024-03-18T22:14:33.15819Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1123,"took":"1.117785ms","hash":2882111714}
	{"level":"info","ts":"2024-03-18T22:14:33.15836Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2882111714,"revision":1123,"compact-revision":881}
	{"level":"info","ts":"2024-03-18T22:18:19.675724Z","caller":"traceutil/trace.go:171","msg":"trace[2032089621] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"355.563224ms","start":"2024-03-18T22:18:19.320066Z","end":"2024-03-18T22:18:19.675629Z","steps":["trace[2032089621] 'process raft request'  (duration: 355.402454ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T22:18:19.676207Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T22:18:19.320045Z","time spent":"355.899714ms","remote":"127.0.0.1:54750","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1549 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 22:18:39 up 19 min,  0 users,  load average: 0.14, 0.28, 0.23
	Linux no-preload-963041 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] <==
	I0318 22:12:35.821929       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:14:34.823747       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:14:34.824060       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0318 22:14:35.824989       1 handler_proxy.go:93] no RequestInfo found in the context
	W0318 22:14:35.825058       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:14:35.825084       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:14:35.825200       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0318 22:14:35.825173       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:14:35.826463       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:15:35.826057       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:15:35.826443       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:15:35.826485       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:15:35.826550       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:15:35.826594       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:15:35.828095       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:17:35.827678       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:17:35.827958       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:17:35.827987       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:17:35.829159       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:17:35.829395       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:17:35.829442       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] <==
	I0318 22:12:49.473339       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:13:18.971060       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:13:19.483493       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:13:48.978996       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:13:49.496647       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:14:18.985152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:14:19.505060       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:14:48.993336       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:14:49.513865       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:15:19.001948       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:15:19.522153       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:15:49.008333       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:15:49.530506       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 22:16:00.703676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="253.929µs"
	I0318 22:16:12.709767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="165.249µs"
	E0318 22:16:19.013751       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:16:19.539391       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:16:49.019751       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:16:49.549415       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:17:19.025208       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:17:19.559436       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:17:49.031050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:17:49.568302       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:18:19.037548       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:18:19.577987       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] <==
	I0318 21:59:37.901165       1 server_others.go:72] "Using iptables proxy"
	I0318 21:59:38.042943       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.84"]
	I0318 21:59:38.088295       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 21:59:38.088390       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 21:59:38.088417       1 server_others.go:168] "Using iptables Proxier"
	I0318 21:59:38.092102       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 21:59:38.092382       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 21:59:38.092429       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:59:38.093422       1 config.go:188] "Starting service config controller"
	I0318 21:59:38.093487       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 21:59:38.093521       1 config.go:97] "Starting endpoint slice config controller"
	I0318 21:59:38.093538       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 21:59:38.093986       1 config.go:315] "Starting node config controller"
	I0318 21:59:38.094984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 21:59:38.194111       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 21:59:38.194168       1 shared_informer.go:318] Caches are synced for service config
	I0318 21:59:38.195597       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] <==
	I0318 21:59:32.769772       1 serving.go:380] Generated self-signed cert in-memory
	W0318 21:59:34.704722       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 21:59:34.704844       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 21:59:34.705019       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 21:59:34.705179       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 21:59:34.825829       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0318 21:59:34.825884       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 21:59:34.832922       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 21:59:34.833121       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 21:59:34.833138       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 21:59:34.833159       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 21:59:34.934139       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 22:16:30 no-preload-963041 kubelet[1323]: E0318 22:16:30.715755    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:16:30 no-preload-963041 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:16:30 no-preload-963041 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:16:30 no-preload-963041 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:16:30 no-preload-963041 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:16:38 no-preload-963041 kubelet[1323]: E0318 22:16:38.684411    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:16:51 no-preload-963041 kubelet[1323]: E0318 22:16:51.684827    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:17:06 no-preload-963041 kubelet[1323]: E0318 22:17:06.687501    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:17:18 no-preload-963041 kubelet[1323]: E0318 22:17:18.686801    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:17:29 no-preload-963041 kubelet[1323]: E0318 22:17:29.685284    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:17:30 no-preload-963041 kubelet[1323]: E0318 22:17:30.717342    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:17:30 no-preload-963041 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:17:30 no-preload-963041 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:17:30 no-preload-963041 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:17:30 no-preload-963041 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:17:43 no-preload-963041 kubelet[1323]: E0318 22:17:43.684499    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:17:56 no-preload-963041 kubelet[1323]: E0318 22:17:56.685671    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:18:07 no-preload-963041 kubelet[1323]: E0318 22:18:07.684652    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:18:22 no-preload-963041 kubelet[1323]: E0318 22:18:22.686405    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	Mar 18 22:18:30 no-preload-963041 kubelet[1323]: E0318 22:18:30.722356    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:18:30 no-preload-963041 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:18:30 no-preload-963041 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:18:30 no-preload-963041 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:18:30 no-preload-963041 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:18:34 no-preload-963041 kubelet[1323]: E0318 22:18:34.687384    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rdthh" podUID="50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2"
	
	
	==> storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] <==
	I0318 21:59:37.331523       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0318 22:00:07.336457       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] <==
	I0318 22:00:08.076679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 22:00:08.084989       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 22:00:08.085290       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 22:00:25.492806       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 22:00:25.492954       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-963041_0a21f7a5-74e1-4a26-bb19-9b7a82763866!
	I0318 22:00:25.493686       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"505589fa-92f7-4d66-9fcc-93d0329ea57e", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-963041_0a21f7a5-74e1-4a26-bb19-9b7a82763866 became leader
	I0318 22:00:25.593905       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-963041_0a21f7a5-74e1-4a26-bb19-9b7a82763866!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963041 -n no-preload-963041
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-963041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-rdthh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-963041 describe pod metrics-server-57f55c9bc5-rdthh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-963041 describe pod metrics-server-57f55c9bc5-rdthh: exit status 1 (64.546117ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rdthh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-963041 describe pod metrics-server-57f55c9bc5-rdthh: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (332.61s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (347.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 22:19:43.803022282 +0000 UTC m=+6640.704810722
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-660775 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-660775 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.543µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-660775 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
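For context, this assertion first waits (up to 9m) for a pod labelled k8s-app=kubernetes-dashboard and then checks that the dashboard-metrics-scraper deployment uses the override image registry.k8s.io/echoserver:1.4 set earlier via "addons enable dashboard --images=MetricsScraper=..."; here the wait timed out, so the deployment describe also hit the already-expired context deadline and the image check ran against empty deployment info. A minimal manual sketch of the same verification, assuming the default-k8s-diff-port-660775 cluster is still running (illustrative only):

	kubectl --context default-k8s-diff-port-660775 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-660775 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o=jsonpath='{.spec.template.spec.containers[*].image}'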
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-660775 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-660775 logs -n 25: (1.303350732s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 22:17 UTC | 18 Mar 24 22:17 UTC |
	| start   | -p newest-cni-962491 --memory=2200 --alsologtostderr   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:17 UTC | 18 Mar 24 22:18 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:18 UTC |
	| addons  | enable metrics-server -p newest-cni-962491             | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-962491                                   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-962491                  | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-962491 --memory=2200 --alsologtostderr   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:18 UTC | 18 Mar 24 22:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 22:19 UTC | 18 Mar 24 22:19 UTC |
	| image   | newest-cni-962491 image list                           | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:19 UTC | 18 Mar 24 22:19 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-962491                                   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:19 UTC | 18 Mar 24 22:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-962491                                   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:19 UTC | 18 Mar 24 22:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-962491                                   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:19 UTC | 18 Mar 24 22:19 UTC |
	| delete  | -p newest-cni-962491                                   | newest-cni-962491            | jenkins | v1.32.0 | 18 Mar 24 22:19 UTC | 18 Mar 24 22:19 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 22:18:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 22:18:59.245638   71620 out.go:291] Setting OutFile to fd 1 ...
	I0318 22:18:59.245751   71620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:18:59.245760   71620 out.go:304] Setting ErrFile to fd 2...
	I0318 22:18:59.245764   71620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 22:18:59.245952   71620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 22:18:59.246519   71620 out.go:298] Setting JSON to false
	I0318 22:18:59.247486   71620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7283,"bootTime":1710793056,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 22:18:59.247541   71620 start.go:139] virtualization: kvm guest
	I0318 22:18:59.249985   71620 out.go:177] * [newest-cni-962491] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 22:18:59.251333   71620 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 22:18:59.252673   71620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 22:18:59.251400   71620 notify.go:220] Checking for updates...
	I0318 22:18:59.254071   71620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:18:59.255238   71620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 22:18:59.256612   71620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 22:18:59.257979   71620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 22:18:59.259741   71620 config.go:182] Loaded profile config "newest-cni-962491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 22:18:59.260269   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:18:59.260317   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:18:59.275900   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0318 22:18:59.276381   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:18:59.276980   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:18:59.277023   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:18:59.277397   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:18:59.277574   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:59.277804   71620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 22:18:59.278068   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:18:59.278101   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:18:59.295102   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45303
	I0318 22:18:59.295615   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:18:59.296044   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:18:59.296068   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:18:59.296385   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:18:59.296562   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:59.331391   71620 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 22:18:59.332703   71620 start.go:297] selected driver: kvm2
	I0318 22:18:59.332716   71620 start.go:901] validating driver "kvm2" against &{Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:18:59.332839   71620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 22:18:59.333554   71620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:18:59.333622   71620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 22:18:59.347853   71620 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 22:18:59.348246   71620 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 22:18:59.348318   71620 cni.go:84] Creating CNI manager for ""
	I0318 22:18:59.348335   71620 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:18:59.348380   71620 start.go:340] cluster config:
	{Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:18:59.348500   71620 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 22:18:59.350337   71620 out.go:177] * Starting "newest-cni-962491" primary control-plane node in "newest-cni-962491" cluster
	I0318 22:18:59.351708   71620 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 22:18:59.351752   71620 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 22:18:59.351764   71620 cache.go:56] Caching tarball of preloaded images
	I0318 22:18:59.351854   71620 preload.go:173] Found /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 22:18:59.351869   71620 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 22:18:59.352012   71620 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/config.json ...
	I0318 22:18:59.352218   71620 start.go:360] acquireMachinesLock for newest-cni-962491: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 22:18:59.352262   71620 start.go:364] duration metric: took 25.466µs to acquireMachinesLock for "newest-cni-962491"
	I0318 22:18:59.352275   71620 start.go:96] Skipping create...Using existing machine configuration
	I0318 22:18:59.352282   71620 fix.go:54] fixHost starting: 
	I0318 22:18:59.352614   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:18:59.352656   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:18:59.366964   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0318 22:18:59.367454   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:18:59.367969   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:18:59.367990   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:18:59.368314   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:18:59.368494   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:18:59.368643   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetState
	I0318 22:18:59.370149   71620 fix.go:112] recreateIfNeeded on newest-cni-962491: state=Stopped err=<nil>
	I0318 22:18:59.370187   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	W0318 22:18:59.370343   71620 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 22:18:59.372502   71620 out.go:177] * Restarting existing kvm2 VM for "newest-cni-962491" ...
	I0318 22:18:59.374033   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Start
	I0318 22:18:59.374231   71620 main.go:141] libmachine: (newest-cni-962491) Ensuring networks are active...
	I0318 22:18:59.375158   71620 main.go:141] libmachine: (newest-cni-962491) Ensuring network default is active
	I0318 22:18:59.375471   71620 main.go:141] libmachine: (newest-cni-962491) Ensuring network mk-newest-cni-962491 is active
	I0318 22:18:59.375851   71620 main.go:141] libmachine: (newest-cni-962491) Getting domain xml...
	I0318 22:18:59.376575   71620 main.go:141] libmachine: (newest-cni-962491) Creating domain...
	I0318 22:19:00.662731   71620 main.go:141] libmachine: (newest-cni-962491) Waiting to get IP...
	I0318 22:19:00.663749   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:00.664211   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:00.664295   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:00.664181   71672 retry.go:31] will retry after 277.497176ms: waiting for machine to come up
	I0318 22:19:00.943739   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:00.944286   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:00.944316   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:00.944240   71672 retry.go:31] will retry after 351.930121ms: waiting for machine to come up
	I0318 22:19:01.297793   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:01.298204   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:01.298234   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:01.298189   71672 retry.go:31] will retry after 468.343158ms: waiting for machine to come up
	I0318 22:19:01.768957   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:01.769506   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:01.769529   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:01.769457   71672 retry.go:31] will retry after 535.175838ms: waiting for machine to come up
	I0318 22:19:02.417630   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:02.418073   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:02.418107   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:02.418052   71672 retry.go:31] will retry after 723.646539ms: waiting for machine to come up
	I0318 22:19:03.142934   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:03.143471   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:03.143497   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:03.143420   71672 retry.go:31] will retry after 629.020988ms: waiting for machine to come up
	I0318 22:19:03.774259   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:03.774727   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:03.774756   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:03.774672   71672 retry.go:31] will retry after 776.565206ms: waiting for machine to come up
	I0318 22:19:04.553041   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:04.553604   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:04.553630   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:04.553554   71672 retry.go:31] will retry after 1.363130116s: waiting for machine to come up
	I0318 22:19:05.918327   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:05.918809   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:05.918854   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:05.918767   71672 retry.go:31] will retry after 1.459066805s: waiting for machine to come up
	I0318 22:19:07.379597   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:07.380100   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:07.380129   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:07.380040   71672 retry.go:31] will retry after 1.557348869s: waiting for machine to come up
	I0318 22:19:08.939740   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:08.940247   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:08.940283   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:08.940202   71672 retry.go:31] will retry after 1.961489039s: waiting for machine to come up
	I0318 22:19:10.902974   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:10.903400   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:10.903430   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:10.903354   71672 retry.go:31] will retry after 2.446407235s: waiting for machine to come up
	I0318 22:19:13.352979   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:13.353496   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:13.353522   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:13.353464   71672 retry.go:31] will retry after 3.189930808s: waiting for machine to come up
	I0318 22:19:16.546101   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:16.546555   71620 main.go:141] libmachine: (newest-cni-962491) DBG | unable to find current IP address of domain newest-cni-962491 in network mk-newest-cni-962491
	I0318 22:19:16.546575   71620 main.go:141] libmachine: (newest-cni-962491) DBG | I0318 22:19:16.546516   71672 retry.go:31] will retry after 3.758977107s: waiting for machine to come up
	I0318 22:19:20.307009   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.307621   71620 main.go:141] libmachine: (newest-cni-962491) Found IP for machine: 192.168.61.192
	I0318 22:19:20.307646   71620 main.go:141] libmachine: (newest-cni-962491) Reserving static IP address...
	I0318 22:19:20.307660   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has current primary IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.308051   71620 main.go:141] libmachine: (newest-cni-962491) Reserved static IP address: 192.168.61.192
	I0318 22:19:20.308084   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "newest-cni-962491", mac: "52:54:00:0a:88:16", ip: "192.168.61.192"} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.308117   71620 main.go:141] libmachine: (newest-cni-962491) Waiting for SSH to be available...
	I0318 22:19:20.308139   71620 main.go:141] libmachine: (newest-cni-962491) DBG | skip adding static IP to network mk-newest-cni-962491 - found existing host DHCP lease matching {name: "newest-cni-962491", mac: "52:54:00:0a:88:16", ip: "192.168.61.192"}
	I0318 22:19:20.308148   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Getting to WaitForSSH function...
	I0318 22:19:20.310320   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.310671   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.310692   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.310808   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Using SSH client type: external
	I0318 22:19:20.310831   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa (-rw-------)
	I0318 22:19:20.310874   71620 main.go:141] libmachine: (newest-cni-962491) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 22:19:20.310886   71620 main.go:141] libmachine: (newest-cni-962491) DBG | About to run SSH command:
	I0318 22:19:20.310895   71620 main.go:141] libmachine: (newest-cni-962491) DBG | exit 0
	I0318 22:19:20.437399   71620 main.go:141] libmachine: (newest-cni-962491) DBG | SSH cmd err, output: <nil>: 
	I0318 22:19:20.437755   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetConfigRaw
	I0318 22:19:20.438402   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetIP
	I0318 22:19:20.441064   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.441424   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.441455   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.441710   71620 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/config.json ...
	I0318 22:19:20.441920   71620 machine.go:94] provisionDockerMachine start ...
	I0318 22:19:20.441939   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:20.442143   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:20.444090   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.444384   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.444413   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.444543   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:20.444722   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:20.444887   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:20.445034   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:20.445183   71620 main.go:141] libmachine: Using SSH client type: native
	I0318 22:19:20.445402   71620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:19:20.445415   71620 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 22:19:20.549776   71620 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 22:19:20.549803   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetMachineName
	I0318 22:19:20.550035   71620 buildroot.go:166] provisioning hostname "newest-cni-962491"
	I0318 22:19:20.550065   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetMachineName
	I0318 22:19:20.550194   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:20.552598   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.553013   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.553044   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.553137   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:20.553323   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:20.553464   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:20.553588   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:20.553749   71620 main.go:141] libmachine: Using SSH client type: native
	I0318 22:19:20.553909   71620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:19:20.553924   71620 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-962491 && echo "newest-cni-962491" | sudo tee /etc/hostname
	I0318 22:19:20.672512   71620 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-962491
	
	I0318 22:19:20.672549   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:20.675421   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.675796   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.675831   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.675947   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:20.676225   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:20.676388   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:20.676572   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:20.676744   71620 main.go:141] libmachine: Using SSH client type: native
	I0318 22:19:20.676987   71620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:19:20.677013   71620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-962491' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-962491/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-962491' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 22:19:20.786511   71620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 22:19:20.786539   71620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 22:19:20.786564   71620 buildroot.go:174] setting up certificates
	I0318 22:19:20.786575   71620 provision.go:84] configureAuth start
	I0318 22:19:20.786583   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetMachineName
	I0318 22:19:20.786890   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetIP
	I0318 22:19:20.789182   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.789516   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.789545   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.789675   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:20.791800   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.792095   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.792131   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.792270   71620 provision.go:143] copyHostCerts
	I0318 22:19:20.792329   71620 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 22:19:20.792339   71620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 22:19:20.792402   71620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 22:19:20.792478   71620 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 22:19:20.792492   71620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 22:19:20.792517   71620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 22:19:20.792565   71620 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 22:19:20.792573   71620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 22:19:20.792596   71620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 22:19:20.792637   71620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.newest-cni-962491 san=[127.0.0.1 192.168.61.192 localhost minikube newest-cni-962491]
	I0318 22:19:20.903878   71620 provision.go:177] copyRemoteCerts
	I0318 22:19:20.903928   71620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 22:19:20.903954   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:20.906701   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.907083   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:20.907123   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:20.907316   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:20.907500   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:20.907652   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:20.907793   71620 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:19:20.987614   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 22:19:21.015189   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 22:19:21.043788   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 22:19:21.070345   71620 provision.go:87] duration metric: took 283.761276ms to configureAuth
	I0318 22:19:21.070375   71620 buildroot.go:189] setting minikube options for container-runtime
	I0318 22:19:21.070622   71620 config.go:182] Loaded profile config "newest-cni-962491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 22:19:21.070687   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:21.073641   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.074033   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:21.074062   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.074264   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:21.074438   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:21.074566   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:21.074692   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:21.074856   71620 main.go:141] libmachine: Using SSH client type: native
	I0318 22:19:21.075064   71620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:19:21.075085   71620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 22:19:21.371789   71620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 22:19:21.371840   71620 machine.go:97] duration metric: took 929.904919ms to provisionDockerMachine
	I0318 22:19:21.371858   71620 start.go:293] postStartSetup for "newest-cni-962491" (driver="kvm2")
	I0318 22:19:21.371875   71620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 22:19:21.371919   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:21.372271   71620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 22:19:21.372301   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:21.374947   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.375305   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:21.375348   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.375456   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:21.375673   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:21.375824   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:21.376015   71620 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:19:21.458499   71620 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 22:19:21.463517   71620 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 22:19:21.463545   71620 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 22:19:21.463618   71620 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 22:19:21.463694   71620 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 22:19:21.463780   71620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 22:19:21.475221   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 22:19:21.501922   71620 start.go:296] duration metric: took 130.051158ms for postStartSetup
	I0318 22:19:21.501954   71620 fix.go:56] duration metric: took 22.149671807s for fixHost
	I0318 22:19:21.501974   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:21.504548   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.504922   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:21.504948   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.505080   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:21.505286   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:21.505473   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:21.505650   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:21.505813   71620 main.go:141] libmachine: Using SSH client type: native
	I0318 22:19:21.506020   71620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.192 22 <nil> <nil>}
	I0318 22:19:21.506035   71620 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 22:19:21.609855   71620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710800361.582883918
	
	I0318 22:19:21.609883   71620 fix.go:216] guest clock: 1710800361.582883918
	I0318 22:19:21.609891   71620 fix.go:229] Guest: 2024-03-18 22:19:21.582883918 +0000 UTC Remote: 2024-03-18 22:19:21.501958575 +0000 UTC m=+22.310734006 (delta=80.925343ms)
	I0318 22:19:21.609908   71620 fix.go:200] guest clock delta is within tolerance: 80.925343ms
	I0318 22:19:21.609918   71620 start.go:83] releasing machines lock for "newest-cni-962491", held for 22.257647904s
	I0318 22:19:21.609939   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:21.610155   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetIP
	I0318 22:19:21.612674   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.613047   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:21.613074   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.613186   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:21.613620   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:21.613795   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:21.613883   71620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 22:19:21.613926   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:21.614020   71620 ssh_runner.go:195] Run: cat /version.json
	I0318 22:19:21.614041   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:21.616304   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.616705   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:21.616735   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.616764   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.616926   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:21.617098   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:21.617177   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:21.617199   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:21.617266   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:21.617383   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:21.617478   71620 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:19:21.617547   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:21.617684   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:21.617832   71620 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:19:21.715227   71620 ssh_runner.go:195] Run: systemctl --version
	I0318 22:19:21.721773   71620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 22:19:21.868324   71620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 22:19:21.875059   71620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 22:19:21.875132   71620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 22:19:21.895007   71620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 22:19:21.895030   71620 start.go:494] detecting cgroup driver to use...
	I0318 22:19:21.895107   71620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 22:19:21.912207   71620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 22:19:21.928967   71620 docker.go:217] disabling cri-docker service (if available) ...
	I0318 22:19:21.929029   71620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 22:19:21.945074   71620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 22:19:21.960772   71620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 22:19:22.093602   71620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 22:19:22.264645   71620 docker.go:233] disabling docker service ...
	I0318 22:19:22.264699   71620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 22:19:22.280611   71620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 22:19:22.295036   71620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 22:19:22.423852   71620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 22:19:22.557200   71620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 22:19:22.572966   71620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 22:19:22.593468   71620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 22:19:22.593529   71620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:19:22.605515   71620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 22:19:22.605573   71620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:19:22.617520   71620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:19:22.629596   71620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:19:22.641473   71620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 22:19:22.653707   71620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:19:22.665531   71620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 22:19:22.684046   71620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
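The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, "cgroupfs" cgroup manager, conmon_cgroup "pod", and an unprivileged-port sysctl. A minimal Go sketch of capturing the same settings as a separate CRI-O drop-in file (the drop-in file name and the local write are assumptions for illustration; minikube itself edits 02-crio.conf over SSH):

package main

import (
	"log"
	"os"
)

// Drop-in content mirroring the settings applied by the sed edits in the log.
// The path is hypothetical; CRI-O reads any *.conf under /etc/crio/crio.conf.d.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	if err := os.WriteFile("/etc/crio/crio.conf.d/99-minikube-sketch.conf", []byte(crioDropIn), 0o644); err != nil {
		log.Fatal(err)
	}
}
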
	I0318 22:19:22.695766   71620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 22:19:22.706319   71620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 22:19:22.706377   71620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 22:19:22.720063   71620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 22:19:22.730852   71620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:19:22.852532   71620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 22:19:23.011110   71620 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 22:19:23.011191   71620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 22:19:23.017465   71620 start.go:562] Will wait 60s for crictl version
	I0318 22:19:23.017538   71620 ssh_runner.go:195] Run: which crictl
	I0318 22:19:23.022035   71620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 22:19:23.064238   71620 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 22:19:23.064303   71620 ssh_runner.go:195] Run: crio --version
	I0318 22:19:23.097796   71620 ssh_runner.go:195] Run: crio --version
	I0318 22:19:23.130981   71620 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 22:19:23.132430   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetIP
	I0318 22:19:23.135115   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:23.135497   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:23.135517   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:23.135768   71620 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 22:19:23.140422   71620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 22:19:23.157027   71620 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0318 22:19:23.158382   71620 kubeadm.go:877] updating cluster {Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 22:19:23.158502   71620 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 22:19:23.158567   71620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 22:19:23.197635   71620 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 22:19:23.197700   71620 ssh_runner.go:195] Run: which lz4
	I0318 22:19:23.202573   71620 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 22:19:23.207448   71620 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 22:19:23.207478   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0318 22:19:24.858967   71620 crio.go:462] duration metric: took 1.656422878s to copy over tarball
	I0318 22:19:24.859037   71620 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 22:19:27.370693   71620 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.511633106s)
	I0318 22:19:27.370719   71620 crio.go:469] duration metric: took 2.511717764s to extract the tarball
	I0318 22:19:27.370728   71620 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 22:19:27.410569   71620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 22:19:27.456145   71620 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 22:19:27.456170   71620 cache_images.go:84] Images are preloaded, skipping loading
	I0318 22:19:27.456180   71620 kubeadm.go:928] updating node { 192.168.61.192 8443 v1.29.0-rc.2 crio true true} ...
	I0318 22:19:27.456311   71620 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-962491 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
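The kubelet unit override shown above is generated from the cluster config: the binary path is keyed by Kubernetes version, and the hostname override, node IP, and feature gates come from the node settings. An illustrative Go sketch of rendering such an override with text/template; the struct and template here are assumptions for the example, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Hypothetical inputs for the override; values match the log above.
type kubeletUnit struct {
	KubernetesVersion, Hostname, NodeIP, FeatureGates string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates={{.FeatureGates}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("unit").Parse(unitTmpl))
	// Render to stdout; minikube instead scp's the rendered file to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	_ = t.Execute(os.Stdout, kubeletUnit{
		KubernetesVersion: "v1.29.0-rc.2",
		Hostname:          "newest-cni-962491",
		NodeIP:            "192.168.61.192",
		FeatureGates:      "ServerSideApply=true",
	})
}
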
	I0318 22:19:27.456383   71620 ssh_runner.go:195] Run: crio config
	I0318 22:19:27.510343   71620 cni.go:84] Creating CNI manager for ""
	I0318 22:19:27.510364   71620 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:19:27.510377   71620 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0318 22:19:27.510396   71620 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.192 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-962491 NodeName:newest-cni-962491 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 22:19:27.510524   71620 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-962491"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 22:19:27.510592   71620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 22:19:27.522216   71620 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 22:19:27.522292   71620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 22:19:27.533276   71620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0318 22:19:27.551654   71620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 22:19:27.569630   71620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0318 22:19:27.588008   71620 ssh_runner.go:195] Run: grep 192.168.61.192	control-plane.minikube.internal$ /etc/hosts
	I0318 22:19:27.592005   71620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 22:19:27.605523   71620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:19:27.740508   71620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:19:27.758708   71620 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491 for IP: 192.168.61.192
	I0318 22:19:27.758733   71620 certs.go:194] generating shared ca certs ...
	I0318 22:19:27.758751   71620 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:19:27.758921   71620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 22:19:27.758977   71620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 22:19:27.758992   71620 certs.go:256] generating profile certs ...
	I0318 22:19:27.759105   71620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/client.key
	I0318 22:19:27.759187   71620 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.key.f1d464e4
	I0318 22:19:27.759250   71620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.key
	I0318 22:19:27.759418   71620 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 22:19:27.759459   71620 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 22:19:27.759472   71620 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 22:19:27.759545   71620 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 22:19:27.759592   71620 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 22:19:27.759622   71620 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 22:19:27.759686   71620 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 22:19:27.760260   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 22:19:27.787470   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 22:19:27.813983   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 22:19:27.843926   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 22:19:27.878394   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 22:19:27.916311   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 22:19:27.963160   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 22:19:27.991126   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/newest-cni-962491/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 22:19:28.018867   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 22:19:28.047405   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 22:19:28.074615   71620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 22:19:28.102441   71620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 22:19:28.121903   71620 ssh_runner.go:195] Run: openssl version
	I0318 22:19:28.128350   71620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 22:19:28.142508   71620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:19:28.147914   71620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:19:28.147971   71620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 22:19:28.154759   71620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 22:19:28.167271   71620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 22:19:28.179810   71620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 22:19:28.185337   71620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 22:19:28.185391   71620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 22:19:28.191990   71620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 22:19:28.205643   71620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 22:19:28.217920   71620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 22:19:28.223631   71620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 22:19:28.223699   71620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 22:19:28.230161   71620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 22:19:28.242735   71620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 22:19:28.248165   71620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 22:19:28.255035   71620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 22:19:28.261730   71620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 22:19:28.268266   71620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 22:19:28.274774   71620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 22:19:28.281184   71620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
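Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. A minimal Go sketch of the same check with crypto/x509 (the helper name and example path are just for illustration):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is the question `openssl x509 -checkend 86400` answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
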
	I0318 22:19:28.287665   71620 kubeadm.go:391] StartCluster: {Name:newest-cni-962491 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-962491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 22:19:28.287734   71620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 22:19:28.287774   71620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 22:19:28.328775   71620 cri.go:89] found id: ""
	I0318 22:19:28.328849   71620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 22:19:28.341448   71620 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 22:19:28.341466   71620 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 22:19:28.341473   71620 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 22:19:28.341521   71620 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 22:19:28.352478   71620 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 22:19:28.353049   71620 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-962491" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:19:28.353414   71620 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-962491" cluster setting kubeconfig missing "newest-cni-962491" context setting]
	I0318 22:19:28.353921   71620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:19:28.428827   71620 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 22:19:28.440560   71620 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.192
	I0318 22:19:28.440590   71620 kubeadm.go:1154] stopping kube-system containers ...
	I0318 22:19:28.440599   71620 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 22:19:28.440659   71620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 22:19:28.486114   71620 cri.go:89] found id: ""
	I0318 22:19:28.486189   71620 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 22:19:28.504370   71620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:19:28.515416   71620 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:19:28.515441   71620 kubeadm.go:156] found existing configuration files:
	
	I0318 22:19:28.515493   71620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:19:28.525340   71620 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:19:28.525409   71620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:19:28.535563   71620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:19:28.545850   71620 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:19:28.545914   71620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:19:28.555827   71620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:19:28.565504   71620 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:19:28.565555   71620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:19:28.575530   71620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:19:28.584893   71620 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:19:28.584951   71620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:19:28.594464   71620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:19:28.604304   71620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 22:19:28.719303   71620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 22:19:30.004408   71620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.285048626s)
	I0318 22:19:30.004442   71620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 22:19:30.228973   71620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 22:19:30.318715   71620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 22:19:30.392264   71620 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:19:30.392454   71620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:19:30.893101   71620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:19:31.392846   71620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:19:31.435151   71620 api_server.go:72] duration metric: took 1.04288635s to wait for apiserver process to appear ...
	I0318 22:19:31.435208   71620 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:19:31.435231   71620 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I0318 22:19:31.435814   71620 api_server.go:269] stopped: https://192.168.61.192:8443/healthz: Get "https://192.168.61.192:8443/healthz": dial tcp 192.168.61.192:8443: connect: connection refused
	I0318 22:19:31.935589   71620 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I0318 22:19:34.372392   71620 api_server.go:279] https://192.168.61.192:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 22:19:34.372419   71620 api_server.go:103] status: https://192.168.61.192:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 22:19:34.372445   71620 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I0318 22:19:34.391134   71620 api_server.go:279] https://192.168.61.192:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 22:19:34.391159   71620 api_server.go:103] status: https://192.168.61.192:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 22:19:34.435274   71620 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I0318 22:19:34.449674   71620 api_server.go:279] https://192.168.61.192:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 22:19:34.449706   71620 api_server.go:103] status: https://192.168.61.192:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 22:19:34.936127   71620 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I0318 22:19:34.941031   71620 api_server.go:279] https://192.168.61.192:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 22:19:34.941056   71620 api_server.go:103] status: https://192.168.61.192:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 22:19:35.435721   71620 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I0318 22:19:35.440491   71620 api_server.go:279] https://192.168.61.192:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 22:19:35.440519   71620 api_server.go:103] status: https://192.168.61.192:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 22:19:35.936135   71620 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I0318 22:19:35.940835   71620 api_server.go:279] https://192.168.61.192:8443/healthz returned 200:
	ok
	I0318 22:19:35.947805   71620 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 22:19:35.947832   71620 api_server.go:131] duration metric: took 4.512615159s to wait for apiserver health ...
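The healthz wait above polls https://192.168.61.192:8443/healthz roughly every 500ms, treating connection refused, 403 (anonymous access before RBAC bootstrap), and 500 (post-start hooks still failing) as "not ready" until the endpoint returns 200 "ok". A minimal Go sketch of such a poll loop; the hard-coded endpoint and the skipped TLS verification are simplifications for illustration (minikube authenticates with the cluster's certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz retries /healthz every 500ms until it returns 200 or the
// deadline passes, mirroring the retry pattern visible in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			// 403 before RBAC bootstrap and 500 while post-start hooks run
			// both mean "keep waiting", just as in the log above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.192:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
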
	I0318 22:19:35.947843   71620 cni.go:84] Creating CNI manager for ""
	I0318 22:19:35.947852   71620 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:19:35.949902   71620 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:19:35.951373   71620 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:19:35.964973   71620 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 22:19:35.988512   71620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:19:36.000265   71620 system_pods.go:59] 8 kube-system pods found
	I0318 22:19:36.000297   71620 system_pods.go:61] "coredns-76f75df574-g5jjv" [687cc596-dd37-4f70-9ecb-ee8aaaf7c41f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:19:36.000304   71620 system_pods.go:61] "etcd-newest-cni-962491" [f31ec214-a5e9-407d-94cf-4e37ac4718c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 22:19:36.000311   71620 system_pods.go:61] "kube-apiserver-newest-cni-962491" [a043eb57-85c4-4d68-89ef-1984d6a3bbb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 22:19:36.000316   71620 system_pods.go:61] "kube-controller-manager-newest-cni-962491" [6eef7c13-1b3d-4b63-bc40-3e403c4ce05b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 22:19:36.000322   71620 system_pods.go:61] "kube-proxy-sflx8" [69569787-fede-4db9-a9d5-cde02c19bed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:19:36.000329   71620 system_pods.go:61] "kube-scheduler-newest-cni-962491" [5ef2022d-ea75-4bea-b463-9232764f666e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 22:19:36.000334   71620 system_pods.go:61] "metrics-server-57f55c9bc5-jfl4g" [f9d1c747-fe5a-4e3e-86c1-bc386a568187] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:19:36.000339   71620 system_pods.go:61] "storage-provisioner" [7e360477-9be6-4993-b863-f87f26cd0b9c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 22:19:36.000346   71620 system_pods.go:74] duration metric: took 11.808664ms to wait for pod list to return data ...
	I0318 22:19:36.000365   71620 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:19:36.003826   71620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:19:36.003856   71620 node_conditions.go:123] node cpu capacity is 2
	I0318 22:19:36.003868   71620 node_conditions.go:105] duration metric: took 3.497518ms to run NodePressure ...
	I0318 22:19:36.003890   71620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 22:19:36.286528   71620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:19:36.326633   71620 ops.go:34] apiserver oom_adj: -16
	I0318 22:19:36.326657   71620 kubeadm.go:591] duration metric: took 7.985177412s to restartPrimaryControlPlane
	I0318 22:19:36.326668   71620 kubeadm.go:393] duration metric: took 8.039006545s to StartCluster
	I0318 22:19:36.326687   71620 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:19:36.326757   71620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:19:36.327640   71620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:19:36.327865   71620 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.192 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:19:36.329818   71620 out.go:177] * Verifying Kubernetes components...
	I0318 22:19:36.327951   71620 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:19:36.328075   71620 config.go:182] Loaded profile config "newest-cni-962491": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 22:19:36.331520   71620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:19:36.331556   71620 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-962491"
	I0318 22:19:36.331566   71620 addons.go:69] Setting default-storageclass=true in profile "newest-cni-962491"
	I0318 22:19:36.331582   71620 addons.go:69] Setting metrics-server=true in profile "newest-cni-962491"
	I0318 22:19:36.331600   71620 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-962491"
	W0318 22:19:36.331612   71620 addons.go:243] addon storage-provisioner should already be in state true
	I0318 22:19:36.331617   71620 addons.go:234] Setting addon metrics-server=true in "newest-cni-962491"
	W0318 22:19:36.331639   71620 addons.go:243] addon metrics-server should already be in state true
	I0318 22:19:36.331646   71620 host.go:66] Checking if "newest-cni-962491" exists ...
	I0318 22:19:36.331685   71620 host.go:66] Checking if "newest-cni-962491" exists ...
	I0318 22:19:36.331607   71620 addons.go:69] Setting dashboard=true in profile "newest-cni-962491"
	I0318 22:19:36.331762   71620 addons.go:234] Setting addon dashboard=true in "newest-cni-962491"
	W0318 22:19:36.331776   71620 addons.go:243] addon dashboard should already be in state true
	I0318 22:19:36.331622   71620 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-962491"
	I0318 22:19:36.331814   71620 host.go:66] Checking if "newest-cni-962491" exists ...
	I0318 22:19:36.332072   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.332072   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.332109   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.332212   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.332244   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.332259   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.332274   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.332287   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.348265   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0318 22:19:36.348849   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.349462   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.349489   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.349929   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.350548   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.350596   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.351498   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0318 22:19:36.351532   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0318 22:19:36.351934   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.352062   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.352471   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.352489   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.352608   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.352633   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.352942   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.353098   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.353620   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0318 22:19:36.353638   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.353665   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.354652   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.356192   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.356236   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.358063   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.358121   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.358553   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.358763   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetState
	I0318 22:19:36.362099   71620 addons.go:234] Setting addon default-storageclass=true in "newest-cni-962491"
	W0318 22:19:36.362118   71620 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:19:36.362143   71620 host.go:66] Checking if "newest-cni-962491" exists ...
	I0318 22:19:36.362456   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.362488   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.371041   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39361
	I0318 22:19:36.371471   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.371968   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.371996   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.372350   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.372536   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetState
	I0318 22:19:36.374471   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:36.376635   71620 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:19:36.378143   71620 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:19:36.378165   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:19:36.378179   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:36.377241   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0318 22:19:36.377271   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0318 22:19:36.378509   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0318 22:19:36.378749   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.378829   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.379245   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.379262   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.379279   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.379389   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.379407   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.379671   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.379728   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.379875   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetState
	I0318 22:19:36.380147   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.380166   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.380174   71620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:19:36.380193   71620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:19:36.380561   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.380763   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetState
	I0318 22:19:36.382003   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:36.383978   71620 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:19:36.382509   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:36.384024   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:36.382773   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:36.384058   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:36.383055   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:36.384220   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:36.385492   71620 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:19:36.385506   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:19:36.386722   71620 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0318 22:19:36.385523   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:36.385646   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:36.389191   71620 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0318 22:19:36.388163   71620 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:19:36.390722   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0318 22:19:36.390736   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0318 22:19:36.390752   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:36.392614   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:36.393162   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:36.393182   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:36.393455   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:36.393612   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:36.393762   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:36.393787   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:36.394054   71620 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:19:36.394348   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:36.394406   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:36.394421   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:36.394483   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:36.394630   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:36.394839   71620 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:19:36.398692   71620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43887
	I0318 22:19:36.399007   71620 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:19:36.399422   71620 main.go:141] libmachine: Using API Version  1
	I0318 22:19:36.399441   71620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:19:36.399721   71620 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:19:36.399888   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetState
	I0318 22:19:36.401136   71620 main.go:141] libmachine: (newest-cni-962491) Calling .DriverName
	I0318 22:19:36.401366   71620 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:19:36.401380   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:19:36.401394   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHHostname
	I0318 22:19:36.403282   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:36.403600   71620 main.go:141] libmachine: (newest-cni-962491) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:88:16", ip: ""} in network mk-newest-cni-962491: {Iface:virbr2 ExpiryTime:2024-03-18 23:19:11 +0000 UTC Type:0 Mac:52:54:00:0a:88:16 Iaid: IPaddr:192.168.61.192 Prefix:24 Hostname:newest-cni-962491 Clientid:01:52:54:00:0a:88:16}
	I0318 22:19:36.403626   71620 main.go:141] libmachine: (newest-cni-962491) DBG | domain newest-cni-962491 has defined IP address 192.168.61.192 and MAC address 52:54:00:0a:88:16 in network mk-newest-cni-962491
	I0318 22:19:36.403707   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHPort
	I0318 22:19:36.403887   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHKeyPath
	I0318 22:19:36.404031   71620 main.go:141] libmachine: (newest-cni-962491) Calling .GetSSHUsername
	I0318 22:19:36.404210   71620 sshutil.go:53] new ssh client: &{IP:192.168.61.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/newest-cni-962491/id_rsa Username:docker}
	I0318 22:19:36.616162   71620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:19:36.670610   71620 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:19:36.670706   71620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:19:36.756066   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0318 22:19:36.756086   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0318 22:19:36.781511   71620 api_server.go:72] duration metric: took 453.612099ms to wait for apiserver process to appear ...
	I0318 22:19:36.781542   71620 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:19:36.781564   71620 api_server.go:253] Checking apiserver healthz at https://192.168.61.192:8443/healthz ...
	I0318 22:19:36.787653   71620 api_server.go:279] https://192.168.61.192:8443/healthz returned 200:
	ok
	I0318 22:19:36.792111   71620 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 22:19:36.792132   71620 api_server.go:131] duration metric: took 10.583289ms to wait for apiserver health ...
	I0318 22:19:36.792139   71620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:19:36.795325   71620 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:19:36.795341   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:19:36.813941   71620 system_pods.go:59] 8 kube-system pods found
	I0318 22:19:36.813969   71620 system_pods.go:61] "coredns-76f75df574-g5jjv" [687cc596-dd37-4f70-9ecb-ee8aaaf7c41f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:19:36.813977   71620 system_pods.go:61] "etcd-newest-cni-962491" [f31ec214-a5e9-407d-94cf-4e37ac4718c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 22:19:36.813986   71620 system_pods.go:61] "kube-apiserver-newest-cni-962491" [a043eb57-85c4-4d68-89ef-1984d6a3bbb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 22:19:36.814002   71620 system_pods.go:61] "kube-controller-manager-newest-cni-962491" [6eef7c13-1b3d-4b63-bc40-3e403c4ce05b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 22:19:36.814008   71620 system_pods.go:61] "kube-proxy-sflx8" [69569787-fede-4db9-a9d5-cde02c19bed6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:19:36.814015   71620 system_pods.go:61] "kube-scheduler-newest-cni-962491" [5ef2022d-ea75-4bea-b463-9232764f666e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 22:19:36.814020   71620 system_pods.go:61] "metrics-server-57f55c9bc5-jfl4g" [f9d1c747-fe5a-4e3e-86c1-bc386a568187] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:19:36.814025   71620 system_pods.go:61] "storage-provisioner" [7e360477-9be6-4993-b863-f87f26cd0b9c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 22:19:36.814051   71620 system_pods.go:74] duration metric: took 21.905225ms to wait for pod list to return data ...
	I0318 22:19:36.814058   71620 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:19:36.822336   71620 default_sa.go:45] found service account: "default"
	I0318 22:19:36.822357   71620 default_sa.go:55] duration metric: took 8.293093ms for default service account to be created ...
	I0318 22:19:36.822368   71620 kubeadm.go:576] duration metric: took 494.478029ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 22:19:36.822382   71620 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:19:36.830696   71620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:19:36.830717   71620 node_conditions.go:123] node cpu capacity is 2
	I0318 22:19:36.830726   71620 node_conditions.go:105] duration metric: took 8.339107ms to run NodePressure ...
	I0318 22:19:36.830737   71620 start.go:240] waiting for startup goroutines ...
	I0318 22:19:36.849492   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0318 22:19:36.849515   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0318 22:19:36.866169   71620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:19:36.867219   71620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:19:36.921223   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0318 22:19:36.921253   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0318 22:19:36.925232   71620 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:19:36.925250   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:19:37.000394   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0318 22:19:37.000423   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0318 22:19:37.057421   71620 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:19:37.057444   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:19:37.112161   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0318 22:19:37.112188   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0318 22:19:37.152734   71620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:19:37.176378   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0318 22:19:37.176403   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0318 22:19:37.236650   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0318 22:19:37.236675   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0318 22:19:37.331275   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0318 22:19:37.331299   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0318 22:19:37.491801   71620 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0318 22:19:37.491826   71620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0318 22:19:37.594876   71620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0318 22:19:38.942476   71620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.07626668s)
	I0318 22:19:38.942480   71620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.075232879s)
	I0318 22:19:38.942527   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:38.942543   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:38.942560   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:38.942580   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:38.942923   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Closing plugin on server side
	I0318 22:19:38.942930   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:38.942997   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:38.943006   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:38.943015   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:38.942958   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:38.943077   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:38.943086   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:38.943093   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:38.942967   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Closing plugin on server side
	I0318 22:19:38.943217   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:38.943231   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:38.943339   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Closing plugin on server side
	I0318 22:19:38.943361   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:38.943378   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:38.952026   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:38.952045   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:38.952297   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Closing plugin on server side
	I0318 22:19:38.952327   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:38.952336   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:39.052365   71620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.899569173s)
	I0318 22:19:39.052413   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:39.052426   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:39.052771   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:39.052781   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Closing plugin on server side
	I0318 22:19:39.052791   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:39.052801   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:39.052809   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:39.053058   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:39.053072   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:39.053084   71620 addons.go:470] Verifying addon metrics-server=true in "newest-cni-962491"
	I0318 22:19:39.308633   71620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.713698329s)
	I0318 22:19:39.308705   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:39.308722   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:39.309074   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:39.309092   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:39.309102   71620 main.go:141] libmachine: Making call to close driver server
	I0318 22:19:39.309111   71620 main.go:141] libmachine: (newest-cni-962491) Calling .Close
	I0318 22:19:39.309359   71620 main.go:141] libmachine: (newest-cni-962491) DBG | Closing plugin on server side
	I0318 22:19:39.309418   71620 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:19:39.309436   71620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:19:39.310930   71620 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-962491 addons enable metrics-server
	
	I0318 22:19:39.312324   71620 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0318 22:19:39.313801   71620 addons.go:505] duration metric: took 2.985860706s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0318 22:19:39.313831   71620 start.go:245] waiting for cluster config update ...
	I0318 22:19:39.313847   71620 start.go:254] writing updated cluster config ...
	I0318 22:19:39.314083   71620 ssh_runner.go:195] Run: rm -f paused
	I0318 22:19:39.363814   71620 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 22:19:39.365569   71620 out.go:177] * Done! kubectl is now configured to use "newest-cni-962491" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.502774830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800384502749383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aadbea42-7561-4361-afa3-90dc39b6f7e8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.503576698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a000f46e-dd21-468a-a6d6-deeedc45dddf name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.503650190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a000f46e-dd21-468a-a6d6-deeedc45dddf name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.503806104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065,PodSandboxId:3f55da2b5c15761e726f21b507676b165fbe8a2f989e8bafcf82e204aa0b1816,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799493824941169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e9b588-fe14-44a7-9dfb-fb40ce72133f,},Annotations:map[string]string{io.kubernetes.container.hash: d558197d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3,PodSandboxId:6d4ae8907100951ac704256f306d62cb27c9b0957cf0d72ce4a8281fe89502b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799491536690395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2dsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f8591de-c0b4-4e0b-9e4f-623b58a59d08,},Annotations:map[string]string{io.kubernetes.container.hash: 33837b02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234,PodSandboxId:78f8818eee8078cfe063d8fe371fb720fd2775b6cfb7eab1f1c269b9a551250b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491912021059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vmj4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4916e690-e21f-4eae-aa11-74ad6c0b7f49,},Annotations:map[string]string{io.kubernetes.container.hash: ee7bf581,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d,PodSandboxId:629ba784158f9bb36e35109c1aef502f048dc0e02249c046151d43cf55eae5d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491804871223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-55f9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce919323-edf8-4caf-8952-
2ec4ac6593cd,},Annotations:map[string]string{io.kubernetes.container.hash: 88fcea70,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f,PodSandboxId:7cdb005a3fdea312c627a648d05cb88d6ad569e83492c818c137dc291d4c4d43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171079947232882278
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b6c0b6afd72a266c450fb622ac71f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59,PodSandboxId:c3a85fc998173bebe1cbcbbf16aae1dc581fc58975bdaae9d3c44168bd656695,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799472341440611,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e45b2b94387c62dd81fdf4957bbadb1,},Annotations:map[string]string{io.kubernetes.container.hash: 73776432,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356,PodSandboxId:9b5c70c76fc3c1271d0324f72c2b7f69945a51243c99e2a9ba0a95986910c6fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799472255789214,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 918a47c6af70c24caefa867aa7cc8e18,},Annotations:map[string]string{io.kubernetes.container.hash: 220cd580,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e,PodSandboxId:a614bb3c9d940bd19b550b4d09066b0f45773b4aff3b6e0d3b0ad7887e1ff60a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799472155355672,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e419c396595b17710729817eddcd7c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a000f46e-dd21-468a-a6d6-deeedc45dddf name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.545342516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61ff2057-58b8-468c-bd25-a1acb386fd85 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.545441515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61ff2057-58b8-468c-bd25-a1acb386fd85 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.547058281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f96ddc4f-7c4a-4c69-94da-54d709da4db7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.547559204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800384547535757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f96ddc4f-7c4a-4c69-94da-54d709da4db7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.548007376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2118ca6-4be1-4658-be4e-1bf17881eeea name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.548076977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2118ca6-4be1-4658-be4e-1bf17881eeea name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.548340343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065,PodSandboxId:3f55da2b5c15761e726f21b507676b165fbe8a2f989e8bafcf82e204aa0b1816,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799493824941169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e9b588-fe14-44a7-9dfb-fb40ce72133f,},Annotations:map[string]string{io.kubernetes.container.hash: d558197d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3,PodSandboxId:6d4ae8907100951ac704256f306d62cb27c9b0957cf0d72ce4a8281fe89502b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799491536690395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2dsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f8591de-c0b4-4e0b-9e4f-623b58a59d08,},Annotations:map[string]string{io.kubernetes.container.hash: 33837b02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234,PodSandboxId:78f8818eee8078cfe063d8fe371fb720fd2775b6cfb7eab1f1c269b9a551250b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491912021059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vmj4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4916e690-e21f-4eae-aa11-74ad6c0b7f49,},Annotations:map[string]string{io.kubernetes.container.hash: ee7bf581,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d,PodSandboxId:629ba784158f9bb36e35109c1aef502f048dc0e02249c046151d43cf55eae5d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491804871223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-55f9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce919323-edf8-4caf-8952-
2ec4ac6593cd,},Annotations:map[string]string{io.kubernetes.container.hash: 88fcea70,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f,PodSandboxId:7cdb005a3fdea312c627a648d05cb88d6ad569e83492c818c137dc291d4c4d43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171079947232882278
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b6c0b6afd72a266c450fb622ac71f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59,PodSandboxId:c3a85fc998173bebe1cbcbbf16aae1dc581fc58975bdaae9d3c44168bd656695,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799472341440611,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e45b2b94387c62dd81fdf4957bbadb1,},Annotations:map[string]string{io.kubernetes.container.hash: 73776432,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356,PodSandboxId:9b5c70c76fc3c1271d0324f72c2b7f69945a51243c99e2a9ba0a95986910c6fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799472255789214,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 918a47c6af70c24caefa867aa7cc8e18,},Annotations:map[string]string{io.kubernetes.container.hash: 220cd580,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e,PodSandboxId:a614bb3c9d940bd19b550b4d09066b0f45773b4aff3b6e0d3b0ad7887e1ff60a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799472155355672,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e419c396595b17710729817eddcd7c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2118ca6-4be1-4658-be4e-1bf17881eeea name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.587307768Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a0ce819-71c8-4b52-97a2-51f4fa89f96b name=/runtime.v1.RuntimeService/Version
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.587406912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a0ce819-71c8-4b52-97a2-51f4fa89f96b name=/runtime.v1.RuntimeService/Version
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.588840353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d73670c5-e4c9-44ce-b452-8f6ed8c4f36c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.589546369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800384589521885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d73670c5-e4c9-44ce-b452-8f6ed8c4f36c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.590315143Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca08efdb-faaa-4454-898e-336a78f69424 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.590419210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca08efdb-faaa-4454-898e-336a78f69424 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.590611208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065,PodSandboxId:3f55da2b5c15761e726f21b507676b165fbe8a2f989e8bafcf82e204aa0b1816,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799493824941169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e9b588-fe14-44a7-9dfb-fb40ce72133f,},Annotations:map[string]string{io.kubernetes.container.hash: d558197d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3,PodSandboxId:6d4ae8907100951ac704256f306d62cb27c9b0957cf0d72ce4a8281fe89502b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799491536690395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2dsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f8591de-c0b4-4e0b-9e4f-623b58a59d08,},Annotations:map[string]string{io.kubernetes.container.hash: 33837b02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234,PodSandboxId:78f8818eee8078cfe063d8fe371fb720fd2775b6cfb7eab1f1c269b9a551250b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491912021059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vmj4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4916e690-e21f-4eae-aa11-74ad6c0b7f49,},Annotations:map[string]string{io.kubernetes.container.hash: ee7bf581,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d,PodSandboxId:629ba784158f9bb36e35109c1aef502f048dc0e02249c046151d43cf55eae5d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491804871223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-55f9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce919323-edf8-4caf-8952-
2ec4ac6593cd,},Annotations:map[string]string{io.kubernetes.container.hash: 88fcea70,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f,PodSandboxId:7cdb005a3fdea312c627a648d05cb88d6ad569e83492c818c137dc291d4c4d43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171079947232882278
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b6c0b6afd72a266c450fb622ac71f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59,PodSandboxId:c3a85fc998173bebe1cbcbbf16aae1dc581fc58975bdaae9d3c44168bd656695,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799472341440611,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e45b2b94387c62dd81fdf4957bbadb1,},Annotations:map[string]string{io.kubernetes.container.hash: 73776432,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356,PodSandboxId:9b5c70c76fc3c1271d0324f72c2b7f69945a51243c99e2a9ba0a95986910c6fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799472255789214,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 918a47c6af70c24caefa867aa7cc8e18,},Annotations:map[string]string{io.kubernetes.container.hash: 220cd580,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e,PodSandboxId:a614bb3c9d940bd19b550b4d09066b0f45773b4aff3b6e0d3b0ad7887e1ff60a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799472155355672,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e419c396595b17710729817eddcd7c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca08efdb-faaa-4454-898e-336a78f69424 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.633704229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d574e669-927b-4058-bc92-3c9bb20f2f72 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.633833416Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d574e669-927b-4058-bc92-3c9bb20f2f72 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.635639271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2077fb1-66db-478a-9f46-81021a721a0c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.636021339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800384635998795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2077fb1-66db-478a-9f46-81021a721a0c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.637069431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9488f916-713c-4ad3-93d1-8f5720914036 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.637125566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9488f916-713c-4ad3-93d1-8f5720914036 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:19:44 default-k8s-diff-port-660775 crio[701]: time="2024-03-18 22:19:44.637401784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065,PodSandboxId:3f55da2b5c15761e726f21b507676b165fbe8a2f989e8bafcf82e204aa0b1816,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710799493824941169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e9b588-fe14-44a7-9dfb-fb40ce72133f,},Annotations:map[string]string{io.kubernetes.container.hash: d558197d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3,PodSandboxId:6d4ae8907100951ac704256f306d62cb27c9b0957cf0d72ce4a8281fe89502b5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710799491536690395,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z2dsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f8591de-c0b4-4e0b-9e4f-623b58a59d08,},Annotations:map[string]string{io.kubernetes.container.hash: 33837b02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234,PodSandboxId:78f8818eee8078cfe063d8fe371fb720fd2775b6cfb7eab1f1c269b9a551250b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491912021059,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vmj4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4916e690-e21f-4eae-aa11-74ad6c0b7f49,},Annotations:map[string]string{io.kubernetes.container.hash: ee7bf581,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d,PodSandboxId:629ba784158f9bb36e35109c1aef502f048dc0e02249c046151d43cf55eae5d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710799491804871223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-55f9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce919323-edf8-4caf-8952-
2ec4ac6593cd,},Annotations:map[string]string{io.kubernetes.container.hash: 88fcea70,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f,PodSandboxId:7cdb005a3fdea312c627a648d05cb88d6ad569e83492c818c137dc291d4c4d43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171079947232882278
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b6c0b6afd72a266c450fb622ac71f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59,PodSandboxId:c3a85fc998173bebe1cbcbbf16aae1dc581fc58975bdaae9d3c44168bd656695,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710799472341440611,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e45b2b94387c62dd81fdf4957bbadb1,},Annotations:map[string]string{io.kubernetes.container.hash: 73776432,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356,PodSandboxId:9b5c70c76fc3c1271d0324f72c2b7f69945a51243c99e2a9ba0a95986910c6fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710799472255789214,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 918a47c6af70c24caefa867aa7cc8e18,},Annotations:map[string]string{io.kubernetes.container.hash: 220cd580,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e,PodSandboxId:a614bb3c9d940bd19b550b4d09066b0f45773b4aff3b6e0d3b0ad7887e1ff60a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710799472155355672,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-660775,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4e419c396595b17710729817eddcd7c,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9488f916-713c-4ad3-93d1-8f5720914036 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a2e24a3274d6b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   3f55da2b5c157       storage-provisioner
	38adbbaa34644       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   78f8818eee807       coredns-5dd5756b68-vmj4l
	f2c789f4cbe4d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   629ba784158f9       coredns-5dd5756b68-55f9q
	a0d2f16a4b499       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   6d4ae89071009       kube-proxy-z2dsq
	e74044536d4b3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   c3a85fc998173       etcd-default-k8s-diff-port-660775
	e86c29661f633       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   7cdb005a3fdea       kube-scheduler-default-k8s-diff-port-660775
	3e36060a89811       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   9b5c70c76fc3c       kube-apiserver-default-k8s-diff-port-660775
	19ea785e0f2a7       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   a614bb3c9d940       kube-controller-manager-default-k8s-diff-port-660775
	
	
	==> coredns [38adbbaa34644798b7ad1241b343870301064e591a6c6ad83abbd38e3899c234] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [f2c789f4cbe4d1d2c151c7e53af91746005f46481fa4aa49bece042881419d3d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-660775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-660775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76
	                    minikube.k8s.io/name=default-k8s-diff-port-660775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T22_04_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 22:04:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-660775
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 22:19:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 22:15:11 +0000   Mon, 18 Mar 2024 22:04:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 22:15:11 +0000   Mon, 18 Mar 2024 22:04:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 22:15:11 +0000   Mon, 18 Mar 2024 22:04:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 22:15:11 +0000   Mon, 18 Mar 2024 22:04:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.150
	  Hostname:    default-k8s-diff-port-660775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff954e8ff45d4b5f9e4b6dac58acdc14
	  System UUID:                ff954e8f-f45d-4b5f-9e4b-6dac58acdc14
	  Boot ID:                    09e2df0a-9467-437a-ba40-c1638b1ff79b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-55f9q                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5dd5756b68-vmj4l                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-660775                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-660775             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-660775    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-z2dsq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-660775             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-x2jjj                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node default-k8s-diff-port-660775 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node default-k8s-diff-port-660775 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node default-k8s-diff-port-660775 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node default-k8s-diff-port-660775 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node default-k8s-diff-port-660775 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node default-k8s-diff-port-660775 event: Registered Node default-k8s-diff-port-660775 in Controller
	
	
	==> dmesg <==
	[  +0.049147] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.862607] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.824242] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.765009] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.764150] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.057791] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066275] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.217827] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.137553] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.344970] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +5.875738] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +0.070392] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.399398] systemd-fstab-generator[907]: Ignoring "noauto" option for root device
	[  +5.624149] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.027072] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 22:00] kauditd_printk_skb: 2 callbacks suppressed
	[Mar18 22:04] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.889085] systemd-fstab-generator[3426]: Ignoring "noauto" option for root device
	[  +4.854763] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.445049] systemd-fstab-generator[3751]: Ignoring "noauto" option for root device
	[ +12.501156] systemd-fstab-generator[3955]: Ignoring "noauto" option for root device
	[  +0.105442] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 22:05] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [e74044536d4b37350533c1c152b0eaab268177ac8e6ca480e0e64f2bd89aec59] <==
	{"level":"info","ts":"2024-03-18T22:04:33.071315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T22:04:33.071374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T22:04:33.071485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab received MsgPreVoteResp from d4c80b78635351ab at term 1"}
	{"level":"info","ts":"2024-03-18T22:04:33.0715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T22:04:33.071509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab received MsgVoteResp from d4c80b78635351ab at term 2"}
	{"level":"info","ts":"2024-03-18T22:04:33.07152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c80b78635351ab became leader at term 2"}
	{"level":"info","ts":"2024-03-18T22:04:33.071528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c80b78635351ab elected leader d4c80b78635351ab at term 2"}
	{"level":"info","ts":"2024-03-18T22:04:33.077449Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d4c80b78635351ab","local-member-attributes":"{Name:default-k8s-diff-port-660775 ClientURLs:[https://192.168.50.150:2379]}","request-path":"/0/members/d4c80b78635351ab/attributes","cluster-id":"d8323e35ed60dfee","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T22:04:33.077594Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T22:04:33.080273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T22:04:33.080767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.150:2379"}
	{"level":"info","ts":"2024-03-18T22:04:33.080919Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:04:33.083858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T22:04:33.084467Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d8323e35ed60dfee","local-member-id":"d4c80b78635351ab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:04:33.090569Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:04:33.090653Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T22:04:33.093271Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T22:04:33.095239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T22:14:33.467258Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":672}
	{"level":"info","ts":"2024-03-18T22:14:33.469835Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":672,"took":"2.236189ms","hash":3710454153}
	{"level":"info","ts":"2024-03-18T22:14:33.469902Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3710454153,"revision":672,"compact-revision":-1}
	{"level":"info","ts":"2024-03-18T22:18:20.843299Z","caller":"traceutil/trace.go:171","msg":"trace[1071761555] transaction","detail":"{read_only:false; response_revision:1101; number_of_response:1; }","duration":"243.733797ms","start":"2024-03-18T22:18:20.599514Z","end":"2024-03-18T22:18:20.843248Z","steps":["trace[1071761555] 'process raft request'  (duration: 243.500621ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T22:19:33.476244Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":916}
	{"level":"info","ts":"2024-03-18T22:19:33.47902Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":916,"took":"2.011638ms","hash":1502595899}
	{"level":"info","ts":"2024-03-18T22:19:33.47928Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1502595899,"revision":916,"compact-revision":672}
	
	
	==> kernel <==
	 22:19:45 up 20 min,  0 users,  load average: 0.06, 0.14, 0.14
	Linux default-k8s-diff-port-660775 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e36060a89811c86f7fb399b87a21a7f4e071c22502bf887c55d3f6dd60df356] <==
	W0318 22:15:36.208420       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:15:36.208497       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:15:36.210125       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:16:35.083832       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 22:17:35.084306       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:17:36.209100       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:17:36.209325       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:17:36.209361       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:17:36.210603       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:17:36.210711       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:17:36.210742       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 22:18:35.084077       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 22:19:35.084105       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:19:35.212083       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:19:35.212322       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:19:35.212911       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 22:19:36.212563       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:19:36.212672       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 22:19:36.212700       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 22:19:36.212675       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 22:19:36.212872       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 22:19:36.213918       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [19ea785e0f2a782a80170ad054782ba3c029b9aa6c5904d4fd5e71f8bf1a736e] <==
	I0318 22:13:50.860597       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:14:20.380433       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:14:20.873099       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:14:50.387400       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:14:50.880997       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:15:20.393833       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:15:20.893419       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:15:50.399796       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:15:50.903546       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 22:16:01.520081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="227.05µs"
	I0318 22:16:15.517638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="102.52µs"
	E0318 22:16:20.406566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:16:20.911699       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:16:50.413392       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:16:50.921955       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:17:20.420021       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:17:20.931986       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:17:50.428814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:17:50.941143       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:18:20.435120       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:18:20.953754       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:18:50.441329       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:18:50.964714       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 22:19:20.447288       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 22:19:20.975804       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a0d2f16a4b49971754f503a7e347bfc6ae8349f3f59d06cd774fcbb8bdf5cde3] <==
	I0318 22:04:52.904079       1 server_others.go:69] "Using iptables proxy"
	I0318 22:04:52.925320       1 node.go:141] Successfully retrieved node IP: 192.168.50.150
	I0318 22:04:53.057590       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 22:04:53.057616       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 22:04:53.065116       1 server_others.go:152] "Using iptables Proxier"
	I0318 22:04:53.067375       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 22:04:53.067639       1 server.go:846] "Version info" version="v1.28.4"
	I0318 22:04:53.067649       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 22:04:53.075434       1 config.go:188] "Starting service config controller"
	I0318 22:04:53.081427       1 config.go:97] "Starting endpoint slice config controller"
	I0318 22:04:53.081439       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 22:04:53.081565       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 22:04:53.081595       1 config.go:315] "Starting node config controller"
	I0318 22:04:53.081731       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 22:04:53.184339       1 shared_informer.go:318] Caches are synced for node config
	I0318 22:04:53.185945       1 shared_informer.go:318] Caches are synced for service config
	I0318 22:04:53.192134       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e86c29661f633db236124250b0c8286fbaece495ab5df550b92116aee104014f] <==
	W0318 22:04:35.238787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 22:04:35.238823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 22:04:35.238872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 22:04:35.238881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 22:04:35.238934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 22:04:35.238990       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 22:04:35.239029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 22:04:35.239096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 22:04:35.242391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 22:04:35.242440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 22:04:35.242493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 22:04:35.242502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 22:04:36.068710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 22:04:36.068772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 22:04:36.118101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 22:04:36.118252       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 22:04:36.123586       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 22:04:36.124027       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 22:04:36.251566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 22:04:36.251621       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 22:04:36.336001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 22:04:36.336054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 22:04:36.454745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 22:04:36.454798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0318 22:04:38.615769       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 22:17:30 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:17:30.500750    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:17:38 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:17:38.619031    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:17:38 default-k8s-diff-port-660775 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:17:38 default-k8s-diff-port-660775 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:17:38 default-k8s-diff-port-660775 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:17:38 default-k8s-diff-port-660775 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:17:44 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:17:44.506129    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:17:59 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:17:59.500242    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:18:13 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:18:13.499267    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:18:28 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:18:28.499872    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:18:38 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:18:38.620927    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:18:38 default-k8s-diff-port-660775 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:18:38 default-k8s-diff-port-660775 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:18:38 default-k8s-diff-port-660775 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:18:38 default-k8s-diff-port-660775 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 22:18:42 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:18:42.499596    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:18:53 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:18:53.499535    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:19:08 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:19:08.501005    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:19:22 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:19:22.502152    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:19:37 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:19:37.499932    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-x2jjj" podUID="567c40f1-097b-4813-8aab-efbfbe1657bb"
	Mar 18 22:19:38 default-k8s-diff-port-660775 kubelet[3758]: E0318 22:19:38.624881    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 22:19:38 default-k8s-diff-port-660775 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 22:19:38 default-k8s-diff-port-660775 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 22:19:38 default-k8s-diff-port-660775 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 22:19:38 default-k8s-diff-port-660775 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [a2e24a3274d6bbbfd06dd17ab7449edda8425ea4740be92b5aa5ff92833fd065] <==
	I0318 22:04:53.915540       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 22:04:53.929303       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 22:04:53.929518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 22:04:53.940732       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 22:04:53.941635       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ef6aed3-5e93-45a8-b487-ab6fa74c09b5", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-660775_bee4e015-a584-400a-b2fb-771f58fdd9d4 became leader
	I0318 22:04:53.941742       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-660775_bee4e015-a584-400a-b2fb-771f58fdd9d4!
	I0318 22:04:54.042903       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-660775_bee4e015-a584-400a-b2fb-771f58fdd9d4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-660775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-x2jjj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-660775 describe pod metrics-server-57f55c9bc5-x2jjj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-660775 describe pod metrics-server-57f55c9bc5-x2jjj: exit status 1 (64.848905ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-x2jjj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-660775 describe pod metrics-server-57f55c9bc5-x2jjj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (347.74s)
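The ImagePullBackOff repeated throughout the kubelet log above is expected to persist: the metrics-server addon for this profile was enabled with its registry rewritten to fake.domain (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` entries in the Audit table further down), so the image registry.k8s.io/echoserver:1.4 can never actually be pulled. A minimal sketch, assuming the addon's usual k8s-app=metrics-server label, of how the configured image and pull events could be confirmed by hand against the same profile:

	# list the metrics-server pod(s) and the image each was asked to pull
	kubectl --context default-k8s-diff-port-660775 -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
	# show the pull-failure events behind the Back-off message
	kubectl --context default-k8s-diff-port-660775 -n kube-system describe pods -l k8s-app=metrics-server | grep -A5 Events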

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (101.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:16:51.608070   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
E0318 22:17:13.470727   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.111:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.111:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 2 (240.190265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-648232" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-648232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-648232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.712µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-648232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
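The deployment info above is empty because the describe call timed out against a stopped apiserver (the status checks below report "Stopped"). A minimal sketch of the equivalent check once the profile's apiserver is reachable again; the deployment name and namespace are taken from the test's own describe command, and the jsonpath query is only an assumption about what one would inspect:

	# print the image configured for the dashboard-metrics-scraper deployment
	kubectl --context old-k8s-version-648232 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'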
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 2 (233.64406ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-648232 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-648232 logs -n 25: (1.598724684s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-389288 sudo cat                              | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo                                  | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo find                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-389288 sudo crio                             | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-389288                                       | bridge-389288                | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	| delete  | -p                                                     | disable-driver-mounts-369155 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:49 UTC |
	|         | disable-driver-mounts-369155                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:49 UTC | 18 Mar 24 21:50 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-660775  | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC |                     |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-141758            | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:50 UTC | 18 Mar 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-963041             | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC | 18 Mar 24 21:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-648232        | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:52 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-660775       | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-141758                 | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-660775 | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:04 UTC |
	|         | default-k8s-diff-port-660775                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-141758                                  | embed-certs-141758           | jenkins | v1.32.0 | 18 Mar 24 21:53 UTC | 18 Mar 24 22:03 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-963041                  | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-648232             | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 21:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-648232                              | old-k8s-version-648232       | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-963041                                   | no-preload-963041            | jenkins | v1.32.0 | 18 Mar 24 21:54 UTC | 18 Mar 24 22:04 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 21:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
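The header layout described above is the standard klog format. As a rough sketch (a hypothetical helper, not minikube code), an entry can be split into severity, date, time, PID, source location and message like this:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogHeader matches entries such as
	// I0318 21:54:36.607114   65699 out.go:291] Setting OutFile to fd 1 ...
	// Capture groups: severity, mmdd, hh:mm:ss.uuuuuu, thread id, file:line, message.
	var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^\]]+)\] (.*)$`)

	func main() {
		line := "I0318 21:54:36.607114   65699 out.go:291] Setting OutFile to fd 1 ..."
		if m := klogHeader.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}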
	I0318 21:54:36.607114   65699 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:54:36.607254   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607266   65699 out.go:304] Setting ErrFile to fd 2...
	I0318 21:54:36.607272   65699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:54:36.607706   65699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:54:36.608596   65699 out.go:298] Setting JSON to false
	I0318 21:54:36.609468   65699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5821,"bootTime":1710793056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:54:36.609529   65699 start.go:139] virtualization: kvm guest
	I0318 21:54:36.611401   65699 out.go:177] * [no-preload-963041] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:54:36.612703   65699 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:54:36.612704   65699 notify.go:220] Checking for updates...
	I0318 21:54:36.613976   65699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:54:36.615157   65699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:54:36.616283   65699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:54:36.617431   65699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:54:36.618615   65699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:54:36.620094   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:54:36.620490   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.620537   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.634914   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0318 21:54:36.635251   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.635706   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.635728   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.636019   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.636173   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.636411   65699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:54:36.636719   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:54:36.636756   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:54:36.650608   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0318 21:54:36.650946   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:54:36.651358   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:54:36.651383   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:54:36.651694   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:54:36.651832   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:54:36.682407   65699 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 21:54:36.683826   65699 start.go:297] selected driver: kvm2
	I0318 21:54:36.683837   65699 start.go:901] validating driver "kvm2" against &{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
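The &{...} dump in the "validating driver" entry is Go's default formatting of minikube's cluster config. A heavily trimmed, illustrative struct covering a few of the fields visible in the dump (field names taken from the log; everything else is an assumption, and the real config.ClusterConfig has many more fields) might look like:

	package config

	// ClusterConfig is an illustrative subset of the fields printed above.
	type ClusterConfig struct {
		Name             string // "no-preload-963041"
		KeepContext      bool
		EmbedCerts       bool
		Memory           int    // MB, 2200
		CPUs             int
		DiskSize         int    // MB, 20000
		Driver           string // "kvm2"
		KubernetesConfig KubernetesConfig
	}

	// KubernetesConfig mirrors the nested KubernetesConfig:{...} block.
	type KubernetesConfig struct {
		KubernetesVersion string // "v1.29.0-rc.2"
		ClusterName       string
		ContainerRuntime  string // "crio"
		NetworkPlugin     string // "cni"
		ServiceCIDR       string // "10.96.0.0/12"
	}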
	I0318 21:54:36.683941   65699 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:54:36.684624   65699 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.684696   65699 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 21:54:36.699415   65699 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 21:54:36.699766   65699 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 21:54:36.699827   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:54:36.699840   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:54:36.699883   65699 start.go:340] cluster config:
	{Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:54:36.699984   65699 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.701584   65699 out.go:177] * Starting "no-preload-963041" primary control-plane node in "no-preload-963041" cluster
	I0318 21:54:36.702792   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:54:36.702911   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:54:36.703027   65699 cache.go:107] acquiring lock: {Name:mk20bcc8d34b80cc44c1e33bc5e0ec5cd82ba46e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703044   65699 cache.go:107] acquiring lock: {Name:mk299438a86024ea6c96280d8bbe30c1283fa996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703087   65699 cache.go:107] acquiring lock: {Name:mkf5facbc69c16807f75e75a80a4afa3f97a0ecc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703124   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0318 21:54:36.703127   65699 start.go:360] acquireMachinesLock for no-preload-963041: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:54:36.703141   65699 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 102.209µs
	I0318 21:54:36.703156   65699 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0318 21:54:36.703104   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 21:54:36.703174   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0318 21:54:36.703172   65699 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 156.262µs
	I0318 21:54:36.703190   65699 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703043   65699 cache.go:107] acquiring lock: {Name:mk4c82b4e60b551671fa99921294b8e1f551d382 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703189   65699 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 104.037µs
	I0318 21:54:36.703209   65699 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0318 21:54:36.703137   65699 cache.go:107] acquiring lock: {Name:mk847ac7ddb8863389782289e61001579ff6ec5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703204   65699 cache.go:107] acquiring lock: {Name:mk1bf8cc3e30a7cf88f25697f1021501ea6ee4ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703243   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 21:54:36.703254   65699 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 163.57µs
	I0318 21:54:36.703233   65699 cache.go:107] acquiring lock: {Name:mkf9c9b33c4d1ca54e3364ad39dcd3b10bc50534 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703265   65699 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703224   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 21:54:36.703282   65699 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 247.672µs
	I0318 21:54:36.703293   65699 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 21:54:36.703293   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 21:54:36.703293   65699 cache.go:107] acquiring lock: {Name:mkd0bd00e6f69df37097a8ce792bcc8844efbc5a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 21:54:36.703315   65699 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 156.33µs
	I0318 21:54:36.703329   65699 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 21:54:36.703363   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 21:54:36.703385   65699 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 207.404µs
	I0318 21:54:36.703400   65699 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703411   65699 cache.go:115] /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 21:54:36.703419   65699 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 164.5µs
	I0318 21:54:36.703435   65699 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 21:54:36.703447   65699 cache.go:87] Successfully saved all images to host disk.
	I0318 21:54:40.421098   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:43.493261   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:49.573105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:52.645158   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:54:58.725124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:01.797077   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:07.877116   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:10.949096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:17.029117   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:20.101131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:26.181141   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:29.253113   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:35.333097   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:38.405132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:44.485208   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:47.557123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:53.637185   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:55:56.709102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:02.789134   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:05.861146   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:11.941102   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:15.013092   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:21.093132   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:24.165129   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:30.245127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:33.317151   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:39.397126   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:42.469163   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:48.549145   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:51.621085   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:56:57.701118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:00.773108   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:06.853105   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:09.925096   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:16.005131   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:19.077111   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:25.157130   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:28.229107   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:34.309152   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:37.381127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:43.461123   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:46.533127   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:52.613124   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:57:55.685135   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:01.765118   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
	I0318 21:58:04.837197   65170 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.150:22: connect: no route to host
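The long run of "no route to host" errors above is process 65170 repeatedly probing TCP port 22 on a VM that is still down. A minimal sketch of that kind of dial-and-retry probe (an illustration only, not minikube's libmachine code; the interval and timeout are assumptions) is:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSHPort polls addr (host:port) until a TCP connection succeeds or the
	// overall deadline passes, logging each failed attempt much like the entries above.
	func waitForSSHPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("Error dialing TCP: %v (retrying)\n", err)
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		if err := waitForSSHPort("192.168.50.150:22", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}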
	I0318 21:58:07.840986   65211 start.go:364] duration metric: took 4m36.169318619s to acquireMachinesLock for "embed-certs-141758"
	I0318 21:58:07.841046   65211 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:07.841054   65211 fix.go:54] fixHost starting: 
	I0318 21:58:07.841507   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:07.841544   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:07.856544   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43199
	I0318 21:58:07.856976   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:07.857424   65211 main.go:141] libmachine: Using API Version  1
	I0318 21:58:07.857452   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:07.857783   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:07.857971   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:07.858126   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 21:58:07.859909   65211 fix.go:112] recreateIfNeeded on embed-certs-141758: state=Stopped err=<nil>
	I0318 21:58:07.859947   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	W0318 21:58:07.860120   65211 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:07.862134   65211 out.go:177] * Restarting existing kvm2 VM for "embed-certs-141758" ...
	I0318 21:58:07.838706   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:07.838746   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839036   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:58:07.839060   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:58:07.839263   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:58:07.840867   65170 machine.go:97] duration metric: took 4m37.426711052s to provisionDockerMachine
	I0318 21:58:07.840915   65170 fix.go:56] duration metric: took 4m37.446713188s for fixHost
	I0318 21:58:07.840923   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 4m37.446748943s
	W0318 21:58:07.840945   65170 start.go:713] error starting host: provision: host is not running
	W0318 21:58:07.841017   65170 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 21:58:07.841026   65170 start.go:728] Will try again in 5 seconds ...
	I0318 21:58:07.863352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Start
	I0318 21:58:07.863483   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring networks are active...
	I0318 21:58:07.864202   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network default is active
	I0318 21:58:07.864652   65211 main.go:141] libmachine: (embed-certs-141758) Ensuring network mk-embed-certs-141758 is active
	I0318 21:58:07.865077   65211 main.go:141] libmachine: (embed-certs-141758) Getting domain xml...
	I0318 21:58:07.865858   65211 main.go:141] libmachine: (embed-certs-141758) Creating domain...
	I0318 21:58:09.026367   65211 main.go:141] libmachine: (embed-certs-141758) Waiting to get IP...
	I0318 21:58:09.027144   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.027524   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.027580   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.027503   66223 retry.go:31] will retry after 260.499882ms: waiting for machine to come up
	I0318 21:58:09.289935   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.290490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.290522   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.290450   66223 retry.go:31] will retry after 328.000758ms: waiting for machine to come up
	I0318 21:58:09.619947   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:09.620337   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:09.620384   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:09.620305   66223 retry.go:31] will retry after 419.640035ms: waiting for machine to come up
	I0318 21:58:10.041775   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.042186   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.042213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.042134   66223 retry.go:31] will retry after 482.732439ms: waiting for machine to come up
	I0318 21:58:10.526892   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:10.527282   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:10.527307   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:10.527253   66223 retry.go:31] will retry after 718.696645ms: waiting for machine to come up
	I0318 21:58:11.247165   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.247545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.247571   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.247501   66223 retry.go:31] will retry after 603.951593ms: waiting for machine to come up
	I0318 21:58:12.842928   65170 start.go:360] acquireMachinesLock for default-k8s-diff-port-660775: {Name:mk09e3a69e52057e605334a45d2c691f6518c279 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 21:58:11.853119   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:11.853408   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:11.853438   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:11.853362   66223 retry.go:31] will retry after 1.191963995s: waiting for machine to come up
	I0318 21:58:13.046915   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:13.047289   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:13.047319   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:13.047237   66223 retry.go:31] will retry after 1.314666633s: waiting for machine to come up
	I0318 21:58:14.363693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:14.364109   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:14.364135   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:14.364064   66223 retry.go:31] will retry after 1.341191632s: waiting for machine to come up
	I0318 21:58:15.707425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:15.707921   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:15.707951   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:15.707862   66223 retry.go:31] will retry after 1.887572842s: waiting for machine to come up
	I0318 21:58:17.596545   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:17.596970   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:17.597002   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:17.596899   66223 retry.go:31] will retry after 2.820006704s: waiting for machine to come up
	I0318 21:58:20.420327   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:20.420693   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:20.420714   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:20.420659   66223 retry.go:31] will retry after 3.099836206s: waiting for machine to come up
	I0318 21:58:23.522155   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:23.522490   65211 main.go:141] libmachine: (embed-certs-141758) DBG | unable to find current IP address of domain embed-certs-141758 in network mk-embed-certs-141758
	I0318 21:58:23.522517   65211 main.go:141] libmachine: (embed-certs-141758) DBG | I0318 21:58:23.522450   66223 retry.go:31] will retry after 4.512794132s: waiting for machine to come up
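The retry.go entries above show the wait-for-IP loop re-querying the DHCP lease with a randomized, slowly growing delay. A simplified sketch of that pattern (lookupIP is a stand-in for the libvirt lease lookup, and the backoff constants are assumptions, not minikube's retry implementation) is:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for reading the domain's DHCP lease; it fails for the
	// first few attempts purely to exercise the retry loop.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.39.243", nil
	}

	// waitForIP retries with a randomized, growing delay until the lease appears
	// or the overall timeout expires.
	func waitForIP(timeout time.Duration) (string, error) {
		start := time.Now()
		delay := 250 * time.Millisecond
		for attempt := 0; time.Since(start) < timeout; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay += delay / 2 // grow the base delay roughly 1.5x per attempt
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		ip, err := waitForIP(30 * time.Second)
		fmt.Println(ip, err)
	}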
	I0318 21:58:29.414007   65622 start.go:364] duration metric: took 3m59.339882587s to acquireMachinesLock for "old-k8s-version-648232"
	I0318 21:58:29.414072   65622 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:29.414080   65622 fix.go:54] fixHost starting: 
	I0318 21:58:29.414429   65622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:29.414462   65622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:29.431057   65622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0318 21:58:29.431482   65622 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:29.432042   65622 main.go:141] libmachine: Using API Version  1
	I0318 21:58:29.432067   65622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:29.432376   65622 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:29.432568   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:29.432725   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetState
	I0318 21:58:29.433956   65622 fix.go:112] recreateIfNeeded on old-k8s-version-648232: state=Stopped err=<nil>
	I0318 21:58:29.433996   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	W0318 21:58:29.434155   65622 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:29.436328   65622 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-648232" ...
	I0318 21:58:29.437884   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .Start
	I0318 21:58:29.438022   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring networks are active...
	I0318 21:58:29.438616   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network default is active
	I0318 21:58:29.438967   65622 main.go:141] libmachine: (old-k8s-version-648232) Ensuring network mk-old-k8s-version-648232 is active
	I0318 21:58:29.439362   65622 main.go:141] libmachine: (old-k8s-version-648232) Getting domain xml...
	I0318 21:58:29.440065   65622 main.go:141] libmachine: (old-k8s-version-648232) Creating domain...
	I0318 21:58:28.036425   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.036898   65211 main.go:141] libmachine: (embed-certs-141758) Found IP for machine: 192.168.39.243
	I0318 21:58:28.036949   65211 main.go:141] libmachine: (embed-certs-141758) Reserving static IP address...
	I0318 21:58:28.036967   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has current primary IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.037428   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.037452   65211 main.go:141] libmachine: (embed-certs-141758) DBG | skip adding static IP to network mk-embed-certs-141758 - found existing host DHCP lease matching {name: "embed-certs-141758", mac: "52:54:00:10:20:63", ip: "192.168.39.243"}
	I0318 21:58:28.037461   65211 main.go:141] libmachine: (embed-certs-141758) Reserved static IP address: 192.168.39.243
	I0318 21:58:28.037473   65211 main.go:141] libmachine: (embed-certs-141758) Waiting for SSH to be available...
	I0318 21:58:28.037485   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Getting to WaitForSSH function...
	I0318 21:58:28.039459   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039778   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.039810   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.039928   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH client type: external
	I0318 21:58:28.039955   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa (-rw-------)
	I0318 21:58:28.039995   65211 main.go:141] libmachine: (embed-certs-141758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:28.040027   65211 main.go:141] libmachine: (embed-certs-141758) DBG | About to run SSH command:
	I0318 21:58:28.040044   65211 main.go:141] libmachine: (embed-certs-141758) DBG | exit 0
	I0318 21:58:28.169219   65211 main.go:141] libmachine: (embed-certs-141758) DBG | SSH cmd err, output: <nil>: 
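The WaitForSSH step above shells out to the system ssh binary with the options printed in the log. A sketch of an equivalent invocation via os/exec (the helper itself is hypothetical; the user, address and key path are copied from the log) is:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runExternalSSH runs a command on the guest with options equivalent to the
	// ones logged above. This is an illustration, not minikube's SSH client.
	func runExternalSSH(user, host, keyPath, command string) ([]byte, error) {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, host),
			command,
		}
		return exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	}

	func main() {
		out, err := runExternalSSH("docker", "192.168.39.243",
			"/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa",
			"exit 0")
		fmt.Printf("output: %s err: %v\n", out, err)
	}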
	I0318 21:58:28.169554   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetConfigRaw
	I0318 21:58:28.170153   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.172372   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.172760   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.172787   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.173016   65211 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/config.json ...
	I0318 21:58:28.173186   65211 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:28.173203   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:28.173399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.175433   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175767   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.175802   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.175920   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.176079   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.176389   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.176553   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.176790   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.176805   65211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:28.285370   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:28.285407   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285629   65211 buildroot.go:166] provisioning hostname "embed-certs-141758"
	I0318 21:58:28.285651   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.285856   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.288382   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288708   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.288739   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.288863   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.289067   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289220   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.289361   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.289515   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.289717   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.289735   65211 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-141758 && echo "embed-certs-141758" | sudo tee /etc/hostname
	I0318 21:58:28.420311   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-141758
	
	I0318 21:58:28.420351   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.422864   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423213   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.423245   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.423431   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.423608   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423759   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.423891   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.424044   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.424234   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.424256   65211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-141758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-141758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-141758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:28.549277   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:28.549307   65211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:28.549325   65211 buildroot.go:174] setting up certificates
	I0318 21:58:28.549334   65211 provision.go:84] configureAuth start
	I0318 21:58:28.549343   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetMachineName
	I0318 21:58:28.549572   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:28.551881   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552183   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.552205   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.552399   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.554341   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554629   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.554656   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.554752   65211 provision.go:143] copyHostCerts
	I0318 21:58:28.554812   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:28.554825   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:28.554912   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:28.555020   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:28.555032   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:28.555062   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:28.555145   65211 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:28.555155   65211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:28.555192   65211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:28.555259   65211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.embed-certs-141758 san=[127.0.0.1 192.168.39.243 embed-certs-141758 localhost minikube]
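The "generating server cert" step issues a server certificate whose SANs are the ones listed (127.0.0.1, 192.168.39.243, embed-certs-141758, localhost, minikube), signed by the profile's CA. A condensed sketch with Go's crypto/x509 (assuming a PKCS#1 RSA CA key, eliding error handling; this is not minikube's provision code) is:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the existing CA certificate and key (paths are placeholders).
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

		// Issue a server certificate with the SANs from the log entry above.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-141758"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-141758", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.243")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
		_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
	}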
	I0318 21:58:28.706111   65211 provision.go:177] copyRemoteCerts
	I0318 21:58:28.706158   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:28.706185   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.708537   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708795   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.708822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.708998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.709164   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.709335   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.709446   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:28.796199   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:28.827207   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:58:28.854273   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:28.880505   65211 provision.go:87] duration metric: took 331.161751ms to configureAuth
	I0318 21:58:28.880524   65211 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:28.880716   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:58:28.880801   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:28.883232   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:28.883583   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:28.883753   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:28.883926   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884087   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:28.884186   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:28.884339   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:28.884481   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:28.884496   65211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:29.164330   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:29.164357   65211 machine.go:97] duration metric: took 991.159236ms to provisionDockerMachine
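The provisioning step above configures CRI-O through a sysconfig drop-in written over SSH. A minimal sketch of the equivalent shell, using only the values visible in the log (the insecure-registry range is the cluster service CIDR 10.96.0.0/12):

    # write the drop-in minikube uses to pass extra flags to CRI-O, then restart the runtime
    sudo mkdir -p /etc/sysconfig
    printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio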
	I0318 21:58:29.164370   65211 start.go:293] postStartSetup for "embed-certs-141758" (driver="kvm2")
	I0318 21:58:29.164381   65211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:29.164434   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.164734   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:29.164758   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.167400   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167696   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.167719   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.167867   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.168065   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.168235   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.168352   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.256141   65211 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:29.261086   65211 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:29.261104   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:29.261157   65211 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:29.261229   65211 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:29.261309   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:29.271174   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:29.297161   65211 start.go:296] duration metric: took 132.781067ms for postStartSetup
	I0318 21:58:29.297192   65211 fix.go:56] duration metric: took 21.456139061s for fixHost
	I0318 21:58:29.297208   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.299741   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300102   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.300127   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.300289   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.300480   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.300750   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.300864   65211 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:29.301028   65211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0318 21:58:29.301039   65211 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 21:58:29.413842   65211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799109.363417589
	
	I0318 21:58:29.413869   65211 fix.go:216] guest clock: 1710799109.363417589
	I0318 21:58:29.413876   65211 fix.go:229] Guest: 2024-03-18 21:58:29.363417589 +0000 UTC Remote: 2024-03-18 21:58:29.297195181 +0000 UTC m=+297.765354372 (delta=66.222408ms)
	I0318 21:58:29.413892   65211 fix.go:200] guest clock delta is within tolerance: 66.222408ms
	I0318 21:58:29.413899   65211 start.go:83] releasing machines lock for "embed-certs-141758", held for 21.572869797s
	I0318 21:58:29.413932   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.414191   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:29.416929   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417293   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.417318   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.417500   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418019   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418159   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 21:58:29.418230   65211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:29.418275   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.418330   65211 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:29.418344   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 21:58:29.420728   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421022   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421053   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421076   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421228   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421413   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421464   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:29.421493   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:29.421593   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.421673   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 21:58:29.421749   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.421828   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 21:58:29.421960   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 21:58:29.422081   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 21:58:29.502548   65211 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:29.531994   65211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:29.681482   65211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:29.689671   65211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:29.689735   65211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:29.711660   65211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:29.711682   65211 start.go:494] detecting cgroup driver to use...
	I0318 21:58:29.711750   65211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:29.728159   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:29.742409   65211 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:29.742450   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:29.757587   65211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:29.772218   65211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:29.883164   65211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:30.046773   65211 docker.go:233] disabling docker service ...
	I0318 21:58:30.046845   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:30.065878   65211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:30.081551   65211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:30.223188   65211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:30.353535   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:30.370291   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:30.391728   65211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:58:30.391789   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.409204   65211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:30.409281   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.426464   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.439964   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.452097   65211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:30.464410   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.475990   65211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.495092   65211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:30.506831   65211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:30.517410   65211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:30.517463   65211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:30.532465   65211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:58:30.543958   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:30.679788   65211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:30.839388   65211 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:30.839466   65211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:30.844666   65211 start.go:562] Will wait 60s for crictl version
	I0318 21:58:30.844720   65211 ssh_runner.go:195] Run: which crictl
	I0318 21:58:30.848886   65211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:30.888598   65211 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:30.888686   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.921097   65211 ssh_runner.go:195] Run: crio --version
	I0318 21:58:30.954037   65211 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:58:30.955378   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetIP
	I0318 21:58:30.958352   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.958792   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 21:58:30.958822   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 21:58:30.959064   65211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:30.963556   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:30.977788   65211 kubeadm.go:877] updating cluster {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:30.977899   65211 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:58:30.977949   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:31.018843   65211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:58:31.018926   65211 ssh_runner.go:195] Run: which lz4
	I0318 21:58:31.023589   65211 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:58:31.028416   65211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:31.028445   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:58:30.668558   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting to get IP...
	I0318 21:58:30.669483   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.669936   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.670023   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.669931   66350 retry.go:31] will retry after 222.544346ms: waiting for machine to come up
	I0318 21:58:30.894570   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:30.895113   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:30.895140   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:30.895068   66350 retry.go:31] will retry after 355.752794ms: waiting for machine to come up
	I0318 21:58:31.252797   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.253265   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.253293   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.253217   66350 retry.go:31] will retry after 473.104426ms: waiting for machine to come up
	I0318 21:58:31.727579   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:31.728129   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:31.728157   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:31.728079   66350 retry.go:31] will retry after 566.412205ms: waiting for machine to come up
	I0318 21:58:32.295552   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.296044   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.296072   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.296004   66350 retry.go:31] will retry after 573.484484ms: waiting for machine to come up
	I0318 21:58:32.870871   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:32.871287   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:32.871346   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:32.871277   66350 retry.go:31] will retry after 932.863596ms: waiting for machine to come up
	I0318 21:58:33.805377   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:33.805847   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:33.805895   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:33.805795   66350 retry.go:31] will retry after 1.069321569s: waiting for machine to come up
	I0318 21:58:34.877311   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:34.877827   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:34.877860   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:34.877773   66350 retry.go:31] will retry after 1.27837332s: waiting for machine to come up
	I0318 21:58:32.944637   65211 crio.go:462] duration metric: took 1.921083293s to copy over tarball
	I0318 21:58:32.944709   65211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:35.696230   65211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751490576s)
	I0318 21:58:35.696261   65211 crio.go:469] duration metric: took 2.751600779s to extract the tarball
	I0318 21:58:35.696271   65211 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:35.739467   65211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:35.794398   65211 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:58:35.794427   65211 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:58:35.794436   65211 kubeadm.go:928] updating node { 192.168.39.243 8443 v1.28.4 crio true true} ...
	I0318 21:58:35.794559   65211 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-141758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:35.794625   65211 ssh_runner.go:195] Run: crio config
	I0318 21:58:35.844849   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:35.844877   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:35.844888   65211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:35.844923   65211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-141758 NodeName:embed-certs-141758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:58:35.845069   65211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-141758"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:35.845124   65211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:58:35.856885   65211 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:35.856950   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:35.867990   65211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 21:58:35.887057   65211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:35.909244   65211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 21:58:35.931267   65211 ssh_runner.go:195] Run: grep 192.168.39.243	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:35.935793   65211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:35.950323   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:36.093377   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:36.112548   65211 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758 for IP: 192.168.39.243
	I0318 21:58:36.112575   65211 certs.go:194] generating shared ca certs ...
	I0318 21:58:36.112596   65211 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:36.112766   65211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:36.112813   65211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:36.112822   65211 certs.go:256] generating profile certs ...
	I0318 21:58:36.112943   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/client.key
	I0318 21:58:36.113043   65211 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key.d575a4ae
	I0318 21:58:36.113097   65211 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key
	I0318 21:58:36.113263   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:36.113307   65211 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:36.113322   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:36.113359   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:36.113396   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:36.113429   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:36.113536   65211 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:36.114412   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:36.147930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:36.177554   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:36.208374   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:36.243425   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 21:58:36.276720   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:58:36.317930   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:36.345717   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/embed-certs-141758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:58:36.371655   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:36.396998   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:36.422750   65211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:36.448117   65211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:36.466558   65211 ssh_runner.go:195] Run: openssl version
	I0318 21:58:36.472888   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:36.484389   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489534   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.489585   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:36.496045   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:58:36.507723   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:36.519030   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524214   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.524267   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:36.531109   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:36.543912   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:36.556130   65211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561330   65211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.561369   65211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:36.567883   65211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
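The three blocks above install each CA certificate under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients look up trusted CAs. A minimal sketch of the same pattern, using a hypothetical cert.pem rather than the specific files in this run:

    # compute the subject hash OpenSSL uses to locate the certificate
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
    # link it into the trust directory under <hash>.0, as the log does for 12568.pem, 125682.pem and minikubeCA.pem
    sudo ln -fs /usr/share/ca-certificates/cert.pem "/etc/ssl/certs/${hash}.0"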
	I0318 21:58:36.158196   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:36.158633   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:36.158667   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:36.158581   66350 retry.go:31] will retry after 1.348066025s: waiting for machine to come up
	I0318 21:58:37.509248   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:37.509617   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:37.509637   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:37.509581   66350 retry.go:31] will retry after 2.080074922s: waiting for machine to come up
	I0318 21:58:39.591514   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:39.591973   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:39.592001   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:39.591934   66350 retry.go:31] will retry after 2.302421788s: waiting for machine to come up
	I0318 21:58:36.579819   65211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:36.824046   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:36.831273   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:36.838571   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:36.845621   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:36.852423   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:36.859433   65211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:58:36.866091   65211 kubeadm.go:391] StartCluster: {Name:embed-certs-141758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-141758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:36.866212   65211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:36.866263   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:36.912390   65211 cri.go:89] found id: ""
	I0318 21:58:36.912460   65211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:36.929896   65211 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:36.929923   65211 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:36.929931   65211 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:36.929985   65211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:36.947191   65211 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:36.948613   65211 kubeconfig.go:125] found "embed-certs-141758" server: "https://192.168.39.243:8443"
	I0318 21:58:36.951641   65211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:36.966095   65211 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.243
	I0318 21:58:36.966135   65211 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:36.966150   65211 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:36.966216   65211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:37.022620   65211 cri.go:89] found id: ""
	I0318 21:58:37.022680   65211 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:37.042338   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:37.054534   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:37.054552   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:37.054588   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:37.066099   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:37.066166   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:37.077340   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:37.088158   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:37.088214   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:37.099190   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.110081   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:37.110118   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:37.121852   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:37.133161   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:37.133215   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:37.144199   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:37.155593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.271593   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:37.921199   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.175721   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.264478   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:38.377591   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:38.377683   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:38.878031   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.377859   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:39.417546   65211 api_server.go:72] duration metric: took 1.039957218s to wait for apiserver process to appear ...
	I0318 21:58:39.417576   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:58:39.417599   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:39.418125   65211 api_server.go:269] stopped: https://192.168.39.243:8443/healthz: Get "https://192.168.39.243:8443/healthz": dial tcp 192.168.39.243:8443: connect: connection refused
	I0318 21:58:39.917663   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.450620   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.450656   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.450668   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.489722   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:58:42.489755   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:58:42.918487   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:42.924551   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:42.924584   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.418077   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.424938   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:58:43.424969   65211 api_server.go:103] status: https://192.168.39.243:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:58:43.918053   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 21:58:43.922905   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 21:58:43.931126   65211 api_server.go:141] control plane version: v1.28.4
	I0318 21:58:43.931151   65211 api_server.go:131] duration metric: took 4.513568499s to wait for apiserver health ...
	I0318 21:58:43.931159   65211 cni.go:84] Creating CNI manager for ""
	I0318 21:58:43.931173   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:43.932876   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:58:41.897573   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:41.898012   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:41.898035   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:41.897964   66350 retry.go:31] will retry after 2.645096928s: waiting for machine to come up
	I0318 21:58:44.544646   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:44.545116   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | unable to find current IP address of domain old-k8s-version-648232 in network mk-old-k8s-version-648232
	I0318 21:58:44.545153   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | I0318 21:58:44.545053   66350 retry.go:31] will retry after 3.010240256s: waiting for machine to come up
	I0318 21:58:43.934155   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:58:43.948750   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:58:43.978849   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:58:43.991046   65211 system_pods.go:59] 8 kube-system pods found
	I0318 21:58:43.991082   65211 system_pods.go:61] "coredns-5dd5756b68-r9pft" [add358cf-d544-4107-a05f-5e60542ea456] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:58:43.991089   65211 system_pods.go:61] "etcd-embed-certs-141758" [31274121-ec65-46b5-bcda-65698c28bd1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:58:43.991095   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [61e4c0db-7a20-4c93-83b3-de4738e82614] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:58:43.991100   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [c2ffe900-4e3a-4c21-ae8f-cd42475207c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:58:43.991105   65211 system_pods.go:61] "kube-proxy-klmnb" [45b0c762-4eaf-4e8a-b321-0d474f61086e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:58:43.991109   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [5aeed9aa-9d98-49c0-bf8a-3998738f6579] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:58:43.991114   65211 system_pods.go:61] "metrics-server-57f55c9bc5-vt7hj" [949e4c0f-6a76-4141-b30c-f27291873f14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:58:43.991123   65211 system_pods.go:61] "storage-provisioner" [0aca1af6-3221-4698-915b-cabb9da662bf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:58:43.991128   65211 system_pods.go:74] duration metric: took 12.25858ms to wait for pod list to return data ...
	I0318 21:58:43.991136   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:58:43.996109   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:58:43.996135   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 21:58:43.996146   65211 node_conditions.go:105] duration metric: took 5.004614ms to run NodePressure ...
	I0318 21:58:43.996163   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:44.227606   65211 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234823   65211 kubeadm.go:733] kubelet initialised
	I0318 21:58:44.234846   65211 kubeadm.go:734] duration metric: took 7.215375ms waiting for restarted kubelet to initialise ...
	I0318 21:58:44.234854   65211 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:58:44.241197   65211 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.248990   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249008   65211 pod_ready.go:81] duration metric: took 7.784519ms for pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.249016   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "coredns-5dd5756b68-r9pft" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.249022   65211 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.254792   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254820   65211 pod_ready.go:81] duration metric: took 5.788084ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.254833   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "etcd-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.254846   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.261248   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261272   65211 pod_ready.go:81] duration metric: took 6.415486ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.261282   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.261291   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.383016   65211 pod_ready.go:97] node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383056   65211 pod_ready.go:81] duration metric: took 121.750871ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	E0318 21:58:44.383069   65211 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-141758" hosting pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-141758" has status "Ready":"False"
	I0318 21:58:44.383078   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784241   65211 pod_ready.go:92] pod "kube-proxy-klmnb" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:44.784264   65211 pod_ready.go:81] duration metric: took 401.177044ms for pod "kube-proxy-klmnb" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:44.784272   65211 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:48.950018   65699 start.go:364] duration metric: took 4m12.246849763s to acquireMachinesLock for "no-preload-963041"
	I0318 21:58:48.950078   65699 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:58:48.950087   65699 fix.go:54] fixHost starting: 
	I0318 21:58:48.950522   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:58:48.950556   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:58:48.966094   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I0318 21:58:48.966492   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:58:48.966970   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:58:48.966994   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:58:48.967295   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:58:48.967443   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:58:48.967548   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:58:48.968800   65699 fix.go:112] recreateIfNeeded on no-preload-963041: state=Stopped err=<nil>
	I0318 21:58:48.968835   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	W0318 21:58:48.969105   65699 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:58:48.970900   65699 out.go:177] * Restarting existing kvm2 VM for "no-preload-963041" ...
	I0318 21:58:47.559274   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559793   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has current primary IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.559814   65622 main.go:141] libmachine: (old-k8s-version-648232) Found IP for machine: 192.168.61.111
	I0318 21:58:47.559828   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserving static IP address...
	I0318 21:58:47.560325   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.560359   65622 main.go:141] libmachine: (old-k8s-version-648232) Reserved static IP address: 192.168.61.111
	I0318 21:58:47.560385   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | skip adding static IP to network mk-old-k8s-version-648232 - found existing host DHCP lease matching {name: "old-k8s-version-648232", mac: "52:54:00:88:cb:42", ip: "192.168.61.111"}
	I0318 21:58:47.560401   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Getting to WaitForSSH function...
	I0318 21:58:47.560417   65622 main.go:141] libmachine: (old-k8s-version-648232) Waiting for SSH to be available...
	I0318 21:58:47.562852   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563285   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.563314   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.563494   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH client type: external
	I0318 21:58:47.563522   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa (-rw-------)
	I0318 21:58:47.563561   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:58:47.563576   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | About to run SSH command:
	I0318 21:58:47.563622   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | exit 0
	I0318 21:58:47.692948   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | SSH cmd err, output: <nil>: 
	I0318 21:58:47.693373   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetConfigRaw
	I0318 21:58:47.694034   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:47.696795   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697184   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.697213   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.697437   65622 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/config.json ...
	I0318 21:58:47.697637   65622 machine.go:94] provisionDockerMachine start ...
	I0318 21:58:47.697658   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:47.697846   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.700225   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700525   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.700549   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.700649   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.700816   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.700993   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.701112   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.701276   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.701440   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.701450   65622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:58:47.809658   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:58:47.809690   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.809920   65622 buildroot.go:166] provisioning hostname "old-k8s-version-648232"
	I0318 21:58:47.809945   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:47.810132   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.812510   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.812869   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.812896   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.813079   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.813266   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813414   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.813559   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.813726   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.813935   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.813954   65622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-648232 && echo "old-k8s-version-648232" | sudo tee /etc/hostname
	I0318 21:58:47.949030   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-648232
	
	I0318 21:58:47.949063   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:47.952028   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952387   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:47.952424   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:47.952586   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:47.952768   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.952972   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:47.953109   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:47.953280   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:47.953488   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:47.953514   65622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-648232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-648232/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-648232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:58:48.072416   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:58:48.072457   65622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:58:48.072484   65622 buildroot.go:174] setting up certificates
	I0318 21:58:48.072494   65622 provision.go:84] configureAuth start
	I0318 21:58:48.072506   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetMachineName
	I0318 21:58:48.072802   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.075880   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076202   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.076235   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.076407   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.078791   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079125   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.079155   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.079292   65622 provision.go:143] copyHostCerts
	I0318 21:58:48.079370   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:58:48.079385   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:58:48.079441   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:58:48.079552   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:58:48.079565   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:58:48.079595   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:58:48.079675   65622 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:58:48.079686   65622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:58:48.079719   65622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:58:48.079797   65622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-648232 san=[127.0.0.1 192.168.61.111 localhost minikube old-k8s-version-648232]
	I0318 21:58:48.236852   65622 provision.go:177] copyRemoteCerts
	I0318 21:58:48.236923   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:58:48.236952   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.239485   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.239807   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.239839   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.240022   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.240187   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.240338   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.240470   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.338739   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:58:48.367538   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 21:58:48.397586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:58:48.425384   65622 provision.go:87] duration metric: took 352.877274ms to configureAuth
	I0318 21:58:48.425415   65622 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:58:48.425624   65622 config.go:182] Loaded profile config "old-k8s-version-648232": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 21:58:48.425693   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.427989   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428345   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.428365   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.428593   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.428793   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.428968   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.429114   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.429269   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.429434   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.429455   65622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:58:48.706098   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:58:48.706131   65622 machine.go:97] duration metric: took 1.008474629s to provisionDockerMachine
	I0318 21:58:48.706148   65622 start.go:293] postStartSetup for "old-k8s-version-648232" (driver="kvm2")
	I0318 21:58:48.706165   65622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:58:48.706193   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.706546   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:58:48.706580   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.709104   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709434   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.709464   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.709589   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.709787   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.709969   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.710109   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.792915   65622 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:58:48.797845   65622 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:58:48.797864   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:58:48.797932   65622 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:58:48.798038   65622 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:58:48.798150   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:58:48.808487   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:48.838863   65622 start.go:296] duration metric: took 132.703395ms for postStartSetup
	I0318 21:58:48.838896   65622 fix.go:56] duration metric: took 19.424816589s for fixHost
	I0318 21:58:48.838927   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.841223   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841572   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.841603   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.841683   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.841876   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842015   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.842138   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.842295   65622 main.go:141] libmachine: Using SSH client type: native
	I0318 21:58:48.842469   65622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.111 22 <nil> <nil>}
	I0318 21:58:48.842483   65622 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:58:48.949868   65622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799128.925696756
	
	I0318 21:58:48.949893   65622 fix.go:216] guest clock: 1710799128.925696756
	I0318 21:58:48.949901   65622 fix.go:229] Guest: 2024-03-18 21:58:48.925696756 +0000 UTC Remote: 2024-03-18 21:58:48.838901995 +0000 UTC m=+258.909510680 (delta=86.794761ms)
	I0318 21:58:48.949925   65622 fix.go:200] guest clock delta is within tolerance: 86.794761ms
	I0318 21:58:48.949932   65622 start.go:83] releasing machines lock for "old-k8s-version-648232", held for 19.535879787s
	I0318 21:58:48.949963   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.950245   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:48.952656   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953000   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.953030   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.953184   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953664   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953845   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .DriverName
	I0318 21:58:48.953931   65622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:58:48.953973   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.954053   65622 ssh_runner.go:195] Run: cat /version.json
	I0318 21:58:48.954070   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHHostname
	I0318 21:58:48.956479   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956764   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956801   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.956828   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.956944   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957100   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957250   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957281   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:48.957302   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:48.957432   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:48.957451   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHPort
	I0318 21:58:48.957582   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHKeyPath
	I0318 21:58:48.957721   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetSSHUsername
	I0318 21:58:48.957858   65622 sshutil.go:53] new ssh client: &{IP:192.168.61.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/old-k8s-version-648232/id_rsa Username:docker}
	I0318 21:58:49.066050   65622 ssh_runner.go:195] Run: systemctl --version
	I0318 21:58:49.072126   65622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:58:49.220860   65622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:58:49.227821   65622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:58:49.227882   65622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:58:49.245262   65622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:58:49.245285   65622 start.go:494] detecting cgroup driver to use...
	I0318 21:58:49.245359   65622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:58:49.261736   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:58:49.278239   65622 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:58:49.278289   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:58:49.297240   65622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:58:49.312813   65622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:58:49.435983   65622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:58:49.584356   65622 docker.go:233] disabling docker service ...
	I0318 21:58:49.584432   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:58:49.603469   65622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:58:49.619602   65622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:58:49.775541   65622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:58:49.919861   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:58:49.940785   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:58:49.964296   65622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 21:58:49.964356   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.976612   65622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:58:49.977221   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:49.988978   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.000697   65622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:58:50.012348   65622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:58:50.023873   65622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:58:50.033574   65622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:58:50.033611   65622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:58:50.047262   65622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:58:50.058328   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:50.205960   65622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:58:50.356293   65622 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:58:50.356376   65622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:58:50.361732   65622 start.go:562] Will wait 60s for crictl version
	I0318 21:58:50.361796   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:50.366347   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:58:50.406298   65622 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:58:50.406398   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.440705   65622 ssh_runner.go:195] Run: crio --version
	I0318 21:58:50.473017   65622 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 21:58:46.795337   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:49.295100   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.299437   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:48.972407   65699 main.go:141] libmachine: (no-preload-963041) Calling .Start
	I0318 21:58:48.972572   65699 main.go:141] libmachine: (no-preload-963041) Ensuring networks are active...
	I0318 21:58:48.973251   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network default is active
	I0318 21:58:48.973606   65699 main.go:141] libmachine: (no-preload-963041) Ensuring network mk-no-preload-963041 is active
	I0318 21:58:48.973992   65699 main.go:141] libmachine: (no-preload-963041) Getting domain xml...
	I0318 21:58:48.974629   65699 main.go:141] libmachine: (no-preload-963041) Creating domain...
	I0318 21:58:50.190010   65699 main.go:141] libmachine: (no-preload-963041) Waiting to get IP...
	I0318 21:58:50.190750   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.191241   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.191320   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.191220   66466 retry.go:31] will retry after 238.162453ms: waiting for machine to come up
	I0318 21:58:50.430778   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.431262   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.431292   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.431191   66466 retry.go:31] will retry after 318.744541ms: waiting for machine to come up
	I0318 21:58:50.751612   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:50.752051   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:50.752086   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:50.752007   66466 retry.go:31] will retry after 464.29047ms: waiting for machine to come up
	I0318 21:58:51.218462   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.219034   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.219062   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.218983   66466 retry.go:31] will retry after 476.466311ms: waiting for machine to come up
	I0318 21:58:50.474496   65622 main.go:141] libmachine: (old-k8s-version-648232) Calling .GetIP
	I0318 21:58:50.477908   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478353   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:cb:42", ip: ""} in network mk-old-k8s-version-648232: {Iface:virbr2 ExpiryTime:2024-03-18 22:58:42 +0000 UTC Type:0 Mac:52:54:00:88:cb:42 Iaid: IPaddr:192.168.61.111 Prefix:24 Hostname:old-k8s-version-648232 Clientid:01:52:54:00:88:cb:42}
	I0318 21:58:50.478389   65622 main.go:141] libmachine: (old-k8s-version-648232) DBG | domain old-k8s-version-648232 has defined IP address 192.168.61.111 and MAC address 52:54:00:88:cb:42 in network mk-old-k8s-version-648232
	I0318 21:58:50.478618   65622 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 21:58:50.483617   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:58:50.499147   65622 kubeadm.go:877] updating cluster {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:58:50.499269   65622 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 21:58:50.499333   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:50.551649   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:50.551716   65622 ssh_runner.go:195] Run: which lz4
	I0318 21:58:50.556525   65622 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 21:58:50.561566   65622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:58:50.561594   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 21:58:52.646283   65622 crio.go:462] duration metric: took 2.089798336s to copy over tarball
	I0318 21:58:52.646359   65622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 21:58:53.792483   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:51.696634   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:51.697179   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:51.697208   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:51.697099   66466 retry.go:31] will retry after 520.896381ms: waiting for machine to come up
	I0318 21:58:52.219861   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:52.220480   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:52.220506   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:52.220414   66466 retry.go:31] will retry after 872.240898ms: waiting for machine to come up
	I0318 21:58:53.094123   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.094547   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.094580   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.094499   66466 retry.go:31] will retry after 757.325359ms: waiting for machine to come up
	I0318 21:58:53.852954   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:53.853422   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:53.853453   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:53.853358   66466 retry.go:31] will retry after 1.459327383s: waiting for machine to come up
	I0318 21:58:55.313969   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:55.314382   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:55.314413   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:55.314328   66466 retry.go:31] will retry after 1.373606235s: waiting for machine to come up
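The repeated "will retry after ...: waiting for machine to come up" entries above are a poll loop: libmachine asks libvirt for the domain's DHCP lease and, until an IP address appears, sleeps for a growing, jittered interval before asking again. A minimal Go sketch of that pattern follows; the names lookupIP and waitForIP are illustrative only and are not the actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; it fails until the
// guest has obtained an address. Purely illustrative.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.72.84", nil
}

// waitForIP polls lookupIP with a jittered, roughly doubling backoff until it
// succeeds or the deadline passes, mirroring the "will retry after ..." lines.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	fmt.Println(ip, err)
}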
	I0318 21:58:55.995228   65622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.348837805s)
	I0318 21:58:55.995262   65622 crio.go:469] duration metric: took 3.348951107s to extract the tarball
	I0318 21:58:55.995271   65622 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:58:56.043148   65622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:58:56.091295   65622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 21:58:56.091320   65622 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:58:56.091409   65622 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.091418   65622 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.091431   65622 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.091421   65622 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.091448   65622 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.091471   65622 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.091506   65622 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.091512   65622 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 21:58:56.092923   65622 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.093028   65622 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.093048   65622 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.093052   65622 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.092924   65622 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:56.093136   65622 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.093143   65622 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 21:58:56.093250   65622 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.239200   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.242232   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.244160   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.248823   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.255548   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.264753   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.306940   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 21:58:56.359783   65622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 21:58:56.359825   65622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.359874   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413012   65622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 21:58:56.413051   65622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.413101   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.413420   65622 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 21:58:56.413455   65622 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.413490   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.442743   65622 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 21:58:56.442787   65622 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.442832   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.450680   65622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 21:58:56.450733   65622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.450798   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.462926   65622 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 21:58:56.462963   65622 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 21:58:56.462989   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 21:58:56.462992   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.463034   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 21:58:56.463090   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 21:58:56.463138   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 21:58:56.463145   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 21:58:56.463159   65622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 21:58:56.463183   65622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.463221   65622 ssh_runner.go:195] Run: which crictl
	I0318 21:58:56.592127   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 21:58:56.592159   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 21:58:56.593931   65622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 21:58:56.593968   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 21:58:56.593973   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 21:58:56.594059   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 21:58:56.594143   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 21:58:56.660138   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 21:58:56.660360   65622 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 21:58:56.983635   65622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:58:57.142451   65622 cache_images.go:92] duration metric: took 1.051113719s to LoadCachedImages
	W0318 21:58:57.142554   65622 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0318 21:58:57.142575   65622 kubeadm.go:928] updating node { 192.168.61.111 8443 v1.20.0 crio true true} ...
	I0318 21:58:57.142723   65622 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-648232 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:58:57.142797   65622 ssh_runner.go:195] Run: crio config
	I0318 21:58:57.195416   65622 cni.go:84] Creating CNI manager for ""
	I0318 21:58:57.195439   65622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:58:57.195451   65622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:58:57.195468   65622 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.111 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-648232 NodeName:old-k8s-version-648232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 21:58:57.195585   65622 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-648232"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:58:57.195650   65622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 21:58:57.208700   65622 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:58:57.208757   65622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:58:57.220276   65622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 21:58:57.239513   65622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:58:57.258540   65622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 21:58:57.277932   65622 ssh_runner.go:195] Run: grep 192.168.61.111	control-plane.minikube.internal$ /etc/hosts
	I0318 21:58:57.282433   65622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
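The /etc/hosts commands above (for host.minikube.internal and control-plane.minikube.internal) use one idempotent shell one-liner: drop any existing line ending in the hostname, echo a fresh "IP<TAB>hostname" mapping, stage the result in a temp file, then sudo cp it back over /etc/hosts. A small Go sketch that builds the same command string is below; hostsUpdateCmd is a made-up helper name, not minikube's own function.

package main

import "fmt"

// hostsUpdateCmd builds the one-liner shown in the log: grep -v strips any
// stale "<TAB><host>"-suffixed line, echo appends the fresh "ip<TAB>host"
// mapping, and the temp file is copied over /etc/hosts with sudo so the
// update is atomic from the reader's point of view.
func hostsUpdateCmd(ip, host string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		host, ip, host)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.61.111", "control-plane.minikube.internal"))
}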
	I0318 21:58:57.298049   65622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:58:57.427745   65622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:58:57.459845   65622 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232 for IP: 192.168.61.111
	I0318 21:58:57.459867   65622 certs.go:194] generating shared ca certs ...
	I0318 21:58:57.459904   65622 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:57.460072   65622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:58:57.460123   65622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:58:57.460138   65622 certs.go:256] generating profile certs ...
	I0318 21:58:57.460254   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/client.key
	I0318 21:58:57.460328   65622 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key.a3f2b5e4
	I0318 21:58:57.460376   65622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key
	I0318 21:58:57.460521   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:58:57.460560   65622 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:58:57.460573   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:58:57.460602   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:58:57.460637   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:58:57.460668   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:58:57.460733   65622 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:58:57.461586   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:58:57.515591   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:58:57.541750   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:58:57.575282   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:58:57.617495   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 21:58:57.657111   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:58:57.705104   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:58:57.737956   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/old-k8s-version-648232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 21:58:57.766218   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:58:57.793952   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:58:57.824458   65622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:58:57.852188   65622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:58:57.872773   65622 ssh_runner.go:195] Run: openssl version
	I0318 21:58:57.880817   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:58:57.896644   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902576   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.902636   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:58:57.908893   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:58:57.922730   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:58:57.936508   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941802   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.941839   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:58:57.948093   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:58:57.961852   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:58:57.974049   65622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978886   65622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.978929   65622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:58:57.984848   65622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
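The sequence above is the standard OpenSSL hashed-directory trust setup: each PEM is copied under /usr/share/ca-certificates, its subject-name hash is computed with openssl x509 -hash -noout, and /etc/ssl/certs/<hash>.0 is symlinked to it so TLS clients that scan the hashed directory pick up the CA. A rough Go sketch of that step is below, assuming the openssl binary is on PATH and write access to /etc/ssl/certs; installCACert is an illustrative name, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the logged sequence: ask openssl for the certificate's
// subject-name hash, then create the /etc/ssl/certs/<hash>.0 symlink that
// OpenSSL-based clients look for when verifying peers.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link so repeated runs stay idempotent.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}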
	I0318 21:58:57.997033   65622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:58:58.002171   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:58:58.008665   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:58:58.014908   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:58:58.021663   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:58:58.029605   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:58:58.038208   65622 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:58:58.044738   65622 kubeadm.go:391] StartCluster: {Name:old-k8s-version-648232 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-648232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.111 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:58:58.044828   65622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:58:58.044881   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.095866   65622 cri.go:89] found id: ""
	I0318 21:58:58.096010   65622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:58:58.108723   65622 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:58:58.108745   65622 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:58:58.108751   65622 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:58:58.108797   65622 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:58:58.120754   65622 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:58:58.121803   65622 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-648232" does not appear in /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:58:58.122532   65622 kubeconfig.go:62] /home/jenkins/minikube-integration/18421-5321/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-648232" cluster setting kubeconfig missing "old-k8s-version-648232" context setting]
	I0318 21:58:58.123561   65622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:58:58.125229   65622 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:58:58.136331   65622 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.111
	I0318 21:58:58.136360   65622 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:58:58.136372   65622 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:58:58.136416   65622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:58:58.179370   65622 cri.go:89] found id: ""
	I0318 21:58:58.179465   65622 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:58:58.197860   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:58:58.208772   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:58:58.208796   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 21:58:58.208837   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:58:58.219033   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:58:58.219090   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:58:58.230223   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:58:58.240823   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:58:58.240886   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:58:58.251629   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.262525   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:58:58.262573   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:58:58.274831   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:58:58.286644   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:58:58.286690   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:58:58.298127   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:58:58.309664   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:58.456818   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.106974   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.334718   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.434113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:58:59.534368   65622 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:58:59.534461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:58:57.057776   65211 pod_ready.go:102] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"False"
	I0318 21:58:57.791727   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 21:58:57.791754   65211 pod_ready.go:81] duration metric: took 13.007474768s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:57.791769   65211 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	I0318 21:58:59.800074   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
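The pod_ready.go lines above poll each pod until its Ready condition turns True (or the per-pod timeout, here 4m0s, expires). A client-go sketch of the same check is below; it is not minikube's pod_ready.go, and the kubeconfig path, namespace, and pod name are taken from the log purely for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// check the `has status "Ready":"False"` log lines are repeating.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is illustrative
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-vt7hj", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}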
	I0318 21:58:56.689643   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:56.690039   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:56.690064   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:56.690020   66466 retry.go:31] will retry after 1.905319343s: waiting for machine to come up
	I0318 21:58:58.597961   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:58:58.598470   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:58:58.598501   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:58:58.598420   66466 retry.go:31] will retry after 2.720364267s: waiting for machine to come up
	I0318 21:59:01.321901   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:01.322290   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:01.322312   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:01.322254   66466 retry.go:31] will retry after 2.73029124s: waiting for machine to come up
	I0318 21:59:00.035251   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:00.534822   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.034721   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:01.535447   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.034809   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.535193   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.034597   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:03.534670   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.035493   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:04.535148   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:02.299143   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.800475   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:04.054294   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:04.054715   65699 main.go:141] libmachine: (no-preload-963041) DBG | unable to find current IP address of domain no-preload-963041 in network mk-no-preload-963041
	I0318 21:59:04.054752   65699 main.go:141] libmachine: (no-preload-963041) DBG | I0318 21:59:04.054671   66466 retry.go:31] will retry after 3.148777081s: waiting for machine to come up
	I0318 21:59:08.706453   65170 start.go:364] duration metric: took 55.86344587s to acquireMachinesLock for "default-k8s-diff-port-660775"
	I0318 21:59:08.706504   65170 start.go:96] Skipping create...Using existing machine configuration
	I0318 21:59:08.706515   65170 fix.go:54] fixHost starting: 
	I0318 21:59:08.706934   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:08.706970   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:08.723564   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0318 21:59:08.723935   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:08.724359   65170 main.go:141] libmachine: Using API Version  1
	I0318 21:59:08.724381   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:08.724671   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:08.724874   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:08.725045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 21:59:08.726635   65170 fix.go:112] recreateIfNeeded on default-k8s-diff-port-660775: state=Stopped err=<nil>
	I0318 21:59:08.726656   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	W0318 21:59:08.726813   65170 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 21:59:08.728839   65170 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-660775" ...
	I0318 21:59:05.035054   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:05.535108   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.035211   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:06.535398   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.035017   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:07.534769   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.035221   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.534593   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.035328   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:09.534533   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:08.730181   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Start
	I0318 21:59:08.730374   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring networks are active...
	I0318 21:59:08.731140   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network default is active
	I0318 21:59:08.731488   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Ensuring network mk-default-k8s-diff-port-660775 is active
	I0318 21:59:08.731850   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Getting domain xml...
	I0318 21:59:08.732544   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Creating domain...
	I0318 21:59:10.014924   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting to get IP...
	I0318 21:59:10.015822   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016215   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.016299   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.016206   66608 retry.go:31] will retry after 301.369371ms: waiting for machine to come up
	I0318 21:59:07.205807   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206239   65699 main.go:141] libmachine: (no-preload-963041) Found IP for machine: 192.168.72.84
	I0318 21:59:07.206266   65699 main.go:141] libmachine: (no-preload-963041) Reserving static IP address...
	I0318 21:59:07.206281   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has current primary IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.206636   65699 main.go:141] libmachine: (no-preload-963041) Reserved static IP address: 192.168.72.84
	I0318 21:59:07.206659   65699 main.go:141] libmachine: (no-preload-963041) Waiting for SSH to be available...
	I0318 21:59:07.206686   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.206711   65699 main.go:141] libmachine: (no-preload-963041) DBG | skip adding static IP to network mk-no-preload-963041 - found existing host DHCP lease matching {name: "no-preload-963041", mac: "52:54:00:b2:30:3e", ip: "192.168.72.84"}
	I0318 21:59:07.206728   65699 main.go:141] libmachine: (no-preload-963041) DBG | Getting to WaitForSSH function...
	I0318 21:59:07.208790   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209157   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.209202   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.209306   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH client type: external
	I0318 21:59:07.209331   65699 main.go:141] libmachine: (no-preload-963041) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa (-rw-------)
	I0318 21:59:07.209367   65699 main.go:141] libmachine: (no-preload-963041) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:07.209381   65699 main.go:141] libmachine: (no-preload-963041) DBG | About to run SSH command:
	I0318 21:59:07.209395   65699 main.go:141] libmachine: (no-preload-963041) DBG | exit 0
	I0318 21:59:07.337357   65699 main.go:141] libmachine: (no-preload-963041) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:07.337688   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetConfigRaw
	I0318 21:59:07.338258   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.340609   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.340957   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.340996   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.341213   65699 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/config.json ...
	I0318 21:59:07.341396   65699 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:07.341462   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:07.341668   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.343956   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344275   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.344311   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.344395   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.344580   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344756   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.344891   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.345086   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.345264   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.345276   65699 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:07.457491   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:07.457543   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457778   65699 buildroot.go:166] provisioning hostname "no-preload-963041"
	I0318 21:59:07.457802   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.457975   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.460729   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461120   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.461145   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.461286   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.461480   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461643   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.461797   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.461980   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.462179   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.462193   65699 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-963041 && echo "no-preload-963041" | sudo tee /etc/hostname
	I0318 21:59:07.592194   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-963041
	
	I0318 21:59:07.592219   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.594794   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595141   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.595177   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.595305   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.595484   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595673   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.595836   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.595987   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:07.596144   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:07.596160   65699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-963041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-963041/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-963041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:07.719593   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:07.719622   65699 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:07.719655   65699 buildroot.go:174] setting up certificates
	I0318 21:59:07.719667   65699 provision.go:84] configureAuth start
	I0318 21:59:07.719681   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetMachineName
	I0318 21:59:07.719928   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:07.722544   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.722907   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.722935   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.723095   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.725108   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725391   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.725420   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.725522   65699 provision.go:143] copyHostCerts
	I0318 21:59:07.725582   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:07.725595   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:07.725665   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:07.725780   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:07.725792   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:07.725817   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:07.725874   65699 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:07.725881   65699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:07.725898   65699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:07.725945   65699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.no-preload-963041 san=[127.0.0.1 192.168.72.84 localhost minikube no-preload-963041]
	I0318 21:59:07.893632   65699 provision.go:177] copyRemoteCerts
	I0318 21:59:07.893685   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:07.893711   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:07.896227   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896501   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:07.896527   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:07.896692   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:07.896859   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:07.897035   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:07.897205   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:07.983501   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:08.014432   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 21:59:08.043755   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 21:59:08.074388   65699 provision.go:87] duration metric: took 354.707214ms to configureAuth
	I0318 21:59:08.074413   65699 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:08.074571   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:08.074638   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.077314   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077658   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.077690   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.077837   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.077996   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078150   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.078289   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.078435   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.078582   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.078596   65699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:08.446711   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:08.446745   65699 machine.go:97] duration metric: took 1.105332987s to provisionDockerMachine
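	Note: the "%!s(MISSING)" in the logged command above is Go's fmt placeholder for a format verb that had no argument at the point the command string was logged; the command actually sent over SSH presumably resembled the following reconstruction (a sketch, not verbatim from the report):
	    # hedged reconstruction of the crio.minikube provisioning step logged above
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio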
	I0318 21:59:08.446757   65699 start.go:293] postStartSetup for "no-preload-963041" (driver="kvm2")
	I0318 21:59:08.446772   65699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:08.446787   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.447090   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:08.447118   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.449551   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.449917   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.449955   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.450117   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.450308   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.450471   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.450611   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.542283   65699 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:08.547389   65699 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:08.547423   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:08.547501   65699 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:08.547606   65699 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:08.547732   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:08.558721   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:08.586136   65699 start.go:296] duration metric: took 139.367706ms for postStartSetup
	I0318 21:59:08.586177   65699 fix.go:56] duration metric: took 19.636089577s for fixHost
	I0318 21:59:08.586201   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.588809   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589192   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.589219   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.589435   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.589604   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589731   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.589838   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.589972   65699 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:08.590182   65699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I0318 21:59:08.590197   65699 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:08.706260   65699 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799148.650279332
	
	I0318 21:59:08.706283   65699 fix.go:216] guest clock: 1710799148.650279332
	I0318 21:59:08.706293   65699 fix.go:229] Guest: 2024-03-18 21:59:08.650279332 +0000 UTC Remote: 2024-03-18 21:59:08.586181408 +0000 UTC m=+272.029432082 (delta=64.097924ms)
	I0318 21:59:08.706337   65699 fix.go:200] guest clock delta is within tolerance: 64.097924ms
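	Note: the "date +%!s(MISSING).%!N(MISSING)" command above carries the same logging artifact; restoring the verbs gives the clock-skew probe sketched below. The guest timestamp it prints (1710799148.650279332) is compared against the host clock, yielding the ~64ms delta reported as within tolerance.
	    # hedged reconstruction of the guest-clock probe run over SSH
	    date +%s.%N    # prints seconds.nanoseconds, e.g. 1710799148.650279332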
	I0318 21:59:08.706350   65699 start.go:83] releasing machines lock for "no-preload-963041", held for 19.756290817s
	I0318 21:59:08.706384   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.706707   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:08.709113   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709389   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.709417   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.709561   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710009   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710155   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:08.710229   65699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:08.710278   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.710330   65699 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:08.710349   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:08.713131   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713154   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713464   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713492   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713521   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:08.713536   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:08.713632   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713739   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:08.713824   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713987   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:08.713988   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714117   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.714177   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:08.714337   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:08.827151   65699 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:08.833847   65699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:08.985638   65699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:08.992294   65699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:08.992372   65699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:09.009419   65699 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
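	Note: the find invocation above has a swallowed format verb as well; a reconstruction of the CNI-disable step, assuming the missing verb is %p (the matched file path), looks like this sketch:
	    # hedged reconstruction: rename bridge/podman CNI configs so they are ignored
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;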
	I0318 21:59:09.009444   65699 start.go:494] detecting cgroup driver to use...
	I0318 21:59:09.009509   65699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:09.031942   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:09.051842   65699 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:09.051901   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:09.068136   65699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:09.084445   65699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:09.234323   65699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:09.402144   65699 docker.go:233] disabling docker service ...
	I0318 21:59:09.402210   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:09.419960   65699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:09.434836   65699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:09.572242   65699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:09.718817   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:09.734607   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:09.756470   65699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:09.756533   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.768595   65699 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:09.768685   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.780726   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.800700   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.817396   65699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:09.829896   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.842211   65699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:09.867273   65699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
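	Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the content sketched below. This is a hedged sketch only; the section headers are assumptions based on the standard CRI-O config layout, while the key/value pairs come from the commands in the log, and the sketch is written to a scratch path rather than the real drop-in:
	    # approximate result of the sed-based CRI-O configuration above
	    cat <<'EOF' > /tmp/02-crio.conf.sketch
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	    EOF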
	I0318 21:59:09.880909   65699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:09.893254   65699 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:09.893297   65699 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:09.910897   65699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:09.922400   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:10.065248   65699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:10.223498   65699 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:10.223577   65699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:10.230686   65699 start.go:562] Will wait 60s for crictl version
	I0318 21:59:10.230752   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.235527   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:10.278655   65699 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:10.278756   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.310992   65699 ssh_runner.go:195] Run: crio --version
	I0318 21:59:10.344925   65699 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 21:59:07.298973   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:09.799803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:10.346255   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetIP
	I0318 21:59:10.349081   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349418   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:10.349437   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:10.349657   65699 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:10.354793   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:10.369744   65699 kubeadm.go:877] updating cluster {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:10.369893   65699 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 21:59:10.369951   65699 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:10.409975   65699 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 21:59:10.410001   65699 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 21:59:10.410062   65699 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.410074   65699 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.410086   65699 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.410122   65699 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.410148   65699 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.410166   65699 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 21:59:10.410213   65699 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.410223   65699 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.411690   65699 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:10.411689   65699 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.411695   65699 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 21:59:10.411730   65699 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.411747   65699 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.411764   65699 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.411793   65699 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.553195   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 21:59:10.553249   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.555774   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.559123   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.562266   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.571390   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.592690   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.702213   65699 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 21:59:10.702265   65699 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.702314   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857028   65699 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 21:59:10.857072   65699 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.857087   65699 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 21:59:10.857117   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857146   65699 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.857154   65699 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 21:59:10.857180   65699 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.857197   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857214   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857211   65699 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 21:59:10.857250   65699 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.857254   65699 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 21:59:10.857264   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 21:59:10.857275   65699 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.857282   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.857305   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:10.872164   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 21:59:10.872195   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 21:59:10.872268   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 21:59:10.927043   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.927147   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 21:59:10.927095   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 21:59:10.927219   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:10.972625   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 21:59:10.972740   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:11.016239   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 21:59:11.016291   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.016356   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:11.016380   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:11.047703   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 21:59:11.047732   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047784   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 21:59:11.047849   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.047952   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:11.069007   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 21:59:11.069064   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 21:59:11.069095   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 21:59:11.069126   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 21:59:11.069139   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
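	Note: the image-cache phase visible above follows a fixed per-image pattern: inspect the runtime for the image, remove a stale tag with crictl when the expected digest is missing, stat the cached tarball on the guest (skipping the copy when it already exists), and finally load it with podman. A hedged, illustrative sketch of that loop, with the stat format verbs restored to %s %y (minikube drives these commands from Go, one image at a time):
	    # illustrative loop only, using paths taken from the log
	    for tar in /var/lib/minikube/images/kube-*_v1.29.0-rc.2; do
	      stat -c "%s %y" "$tar" && echo "copy: skipping $tar (exists)"
	      sudo podman load -i "$tar"
	    done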
	I0318 21:59:10.035384   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.534785   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.034607   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:11.535142   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.035259   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:12.535494   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.034673   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:13.535452   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.034630   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:14.535058   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:10.319858   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320279   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.320310   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.320224   66608 retry.go:31] will retry after 253.332307ms: waiting for machine to come up
	I0318 21:59:10.575748   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576242   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:10.576271   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:10.576194   66608 retry.go:31] will retry after 484.439329ms: waiting for machine to come up
	I0318 21:59:11.061837   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062291   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.062316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.062247   66608 retry.go:31] will retry after 520.757249ms: waiting for machine to come up
	I0318 21:59:11.585112   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585541   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:11.585571   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:11.585485   66608 retry.go:31] will retry after 482.335377ms: waiting for machine to come up
	I0318 21:59:12.068813   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069420   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:12.069456   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:12.069374   66608 retry.go:31] will retry after 936.563875ms: waiting for machine to come up
	I0318 21:59:13.007582   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.007986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.008012   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.007945   66608 retry.go:31] will retry after 864.468016ms: waiting for machine to come up
	I0318 21:59:13.874400   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874910   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:13.874942   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:13.874875   66608 retry.go:31] will retry after 1.239808671s: waiting for machine to come up
	I0318 21:59:15.116440   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116834   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:15.116855   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:15.116784   66608 retry.go:31] will retry after 1.208141339s: waiting for machine to come up
	I0318 21:59:11.804059   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:14.301199   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:16.301517   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:11.928081   65699 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.330891   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.28291236s)
	I0318 21:59:14.330933   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330948   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.261785854s)
	I0318 21:59:14.330971   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 21:59:14.330974   65699 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.402863992s)
	I0318 21:59:14.330979   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.283167958s)
	I0318 21:59:14.330996   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 21:59:14.331011   65699 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 21:59:14.331019   65699 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331043   65699 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:14.331064   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 21:59:14.331086   65699 ssh_runner.go:195] Run: which crictl
	I0318 21:59:14.336430   65699 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:15.034609   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:15.534895   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.034956   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.535474   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:17.534736   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.035297   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:18.534669   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.035540   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:19.534617   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:16.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:16.327415   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:16.327350   66608 retry.go:31] will retry after 2.24875206s: waiting for machine to come up
	I0318 21:59:18.578068   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578644   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:18.578677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:18.578589   66608 retry.go:31] will retry after 2.267791851s: waiting for machine to come up
	I0318 21:59:18.800406   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:20.800524   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:18.591731   65699 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.255273393s)
	I0318 21:59:18.591789   65699 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 21:59:18.591897   65699 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:18.591937   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.260848845s)
	I0318 21:59:18.591958   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 21:59:18.591986   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:18.592046   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 21:59:19.859577   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.267508443s)
	I0318 21:59:19.859608   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 21:59:19.859637   65699 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:19.859641   65699 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.267714811s)
	I0318 21:59:19.859674   65699 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 21:59:19.859685   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 21:59:20.035133   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.534922   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.035083   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:21.534538   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.035505   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:22.535008   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.035123   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:23.535181   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.034939   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:24.534985   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:20.847586   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848099   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:20.848135   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:20.848048   66608 retry.go:31] will retry after 2.918466892s: waiting for machine to come up
	I0318 21:59:23.768491   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.768999   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | unable to find current IP address of domain default-k8s-diff-port-660775 in network mk-default-k8s-diff-port-660775
	I0318 21:59:23.769030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | I0318 21:59:23.768962   66608 retry.go:31] will retry after 4.373256501s: waiting for machine to come up
	I0318 21:59:22.800765   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:24.801392   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:21.944666   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.084944906s)
	I0318 21:59:21.944700   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 21:59:21.944720   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:21.944766   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 21:59:24.714752   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.769964684s)
	I0318 21:59:24.714793   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 21:59:24.714827   65699 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:24.714884   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 21:59:25.035324   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:25.534635   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.034965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:26.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.035448   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:27.534690   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.034991   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.535057   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.034585   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:29.535220   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:28.146019   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146507   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Found IP for machine: 192.168.50.150
	I0318 21:59:28.146533   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserving static IP address...
	I0318 21:59:28.146549   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has current primary IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.146939   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.146966   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Reserved static IP address: 192.168.50.150
	I0318 21:59:28.146986   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | skip adding static IP to network mk-default-k8s-diff-port-660775 - found existing host DHCP lease matching {name: "default-k8s-diff-port-660775", mac: "52:54:00:80:9c:26", ip: "192.168.50.150"}
	I0318 21:59:28.147006   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Getting to WaitForSSH function...
	I0318 21:59:28.147030   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Waiting for SSH to be available...
	I0318 21:59:28.149408   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149771   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.149799   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.149929   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH client type: external
	I0318 21:59:28.149978   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Using SSH private key: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa (-rw-------)
	I0318 21:59:28.150020   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 21:59:28.150039   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | About to run SSH command:
	I0318 21:59:28.150050   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | exit 0
	I0318 21:59:28.273437   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | SSH cmd err, output: <nil>: 
	I0318 21:59:28.273768   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetConfigRaw
	I0318 21:59:28.274402   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.277330   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.277757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.277997   65170 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/config.json ...
	I0318 21:59:28.278217   65170 machine.go:94] provisionDockerMachine start ...
	I0318 21:59:28.278240   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:28.278435   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.280754   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281149   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.281178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.281318   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.281495   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281646   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.281796   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.281955   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.282163   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.282185   65170 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 21:59:28.390614   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 21:59:28.390642   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.390896   65170 buildroot.go:166] provisioning hostname "default-k8s-diff-port-660775"
	I0318 21:59:28.390923   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.391095   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.394421   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.394838   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.394876   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.395178   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.395410   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.395775   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.395953   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.396145   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.396160   65170 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-660775 && echo "default-k8s-diff-port-660775" | sudo tee /etc/hostname
	I0318 21:59:28.522303   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-660775
	
	I0318 21:59:28.522347   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.525224   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.525667   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.525789   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.525961   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526122   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.526267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.526471   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.526651   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.526676   65170 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-660775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-660775/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-660775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 21:59:28.641488   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 21:59:28.641521   65170 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18421-5321/.minikube CaCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18421-5321/.minikube}
	I0318 21:59:28.641547   65170 buildroot.go:174] setting up certificates
	I0318 21:59:28.641555   65170 provision.go:84] configureAuth start
	I0318 21:59:28.641564   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetMachineName
	I0318 21:59:28.641871   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:28.644934   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645267   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.645301   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.645425   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.647753   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648089   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.648119   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.648360   65170 provision.go:143] copyHostCerts
	I0318 21:59:28.648423   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem, removing ...
	I0318 21:59:28.648435   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem
	I0318 21:59:28.648507   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/ca.pem (1078 bytes)
	I0318 21:59:28.648620   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem, removing ...
	I0318 21:59:28.648631   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem
	I0318 21:59:28.648660   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/cert.pem (1123 bytes)
	I0318 21:59:28.648731   65170 exec_runner.go:144] found /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem, removing ...
	I0318 21:59:28.648740   65170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem
	I0318 21:59:28.648769   65170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18421-5321/.minikube/key.pem (1679 bytes)
	I0318 21:59:28.648829   65170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-660775 san=[127.0.0.1 192.168.50.150 default-k8s-diff-port-660775 localhost minikube]
	I0318 21:59:28.697191   65170 provision.go:177] copyRemoteCerts
	I0318 21:59:28.697253   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 21:59:28.697274   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.699919   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700237   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.700269   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.700477   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.700694   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.700882   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.701060   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:28.793840   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 21:59:28.829285   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 21:59:28.857628   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 21:59:28.886344   65170 provision.go:87] duration metric: took 244.778215ms to configureAuth
	I0318 21:59:28.886366   65170 buildroot.go:189] setting minikube options for container-runtime
	I0318 21:59:28.886527   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:59:28.886593   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:28.889885   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890321   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:28.890351   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:28.890534   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:28.890721   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.890879   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:28.891013   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:28.891190   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:28.891366   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:28.891399   65170 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 21:59:29.189002   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 21:59:29.189033   65170 machine.go:97] duration metric: took 910.801375ms to provisionDockerMachine
	I0318 21:59:29.189046   65170 start.go:293] postStartSetup for "default-k8s-diff-port-660775" (driver="kvm2")
	I0318 21:59:29.189058   65170 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 21:59:29.189083   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.189409   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 21:59:29.189438   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.192164   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.192512   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.192677   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.192866   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.193045   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.193190   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.277850   65170 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 21:59:29.282886   65170 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 21:59:29.282909   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/addons for local assets ...
	I0318 21:59:29.282975   65170 filesync.go:126] Scanning /home/jenkins/minikube-integration/18421-5321/.minikube/files for local assets ...
	I0318 21:59:29.283065   65170 filesync.go:149] local asset: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem -> 125682.pem in /etc/ssl/certs
	I0318 21:59:29.283172   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 21:59:29.296052   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:29.323906   65170 start.go:296] duration metric: took 134.847993ms for postStartSetup
	I0318 21:59:29.323945   65170 fix.go:56] duration metric: took 20.61742941s for fixHost
	I0318 21:59:29.323969   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.326616   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.326920   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.326950   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.327063   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.327300   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327472   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.327622   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.327853   65170 main.go:141] libmachine: Using SSH client type: native
	I0318 21:59:29.328058   65170 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0318 21:59:29.328070   65170 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 21:59:29.430348   65170 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710799169.377980776
	
	I0318 21:59:29.430377   65170 fix.go:216] guest clock: 1710799169.377980776
	I0318 21:59:29.430386   65170 fix.go:229] Guest: 2024-03-18 21:59:29.377980776 +0000 UTC Remote: 2024-03-18 21:59:29.323950953 +0000 UTC m=+359.071824665 (delta=54.029823ms)
	I0318 21:59:29.430411   65170 fix.go:200] guest clock delta is within tolerance: 54.029823ms
	I0318 21:59:29.430420   65170 start.go:83] releasing machines lock for "default-k8s-diff-port-660775", held for 20.723939352s
	I0318 21:59:29.430450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.430727   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:29.433339   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433686   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.433713   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.433865   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434308   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434531   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 21:59:29.434632   65170 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 21:59:29.434682   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.434783   65170 ssh_runner.go:195] Run: cat /version.json
	I0318 21:59:29.434811   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 21:59:29.437380   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437479   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437731   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437760   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.437829   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:29.437880   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:29.438033   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438170   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 21:59:29.438244   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438332   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 21:59:29.438393   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438484   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 21:59:29.438603   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.438694   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 21:59:29.540670   65170 ssh_runner.go:195] Run: systemctl --version
	I0318 21:59:29.547318   65170 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 21:59:29.704221   65170 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 21:59:29.710762   65170 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 21:59:29.710832   65170 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 21:59:29.727820   65170 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 21:59:29.727838   65170 start.go:494] detecting cgroup driver to use...
	I0318 21:59:29.727905   65170 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 21:59:29.745750   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 21:59:29.760984   65170 docker.go:217] disabling cri-docker service (if available) ...
	I0318 21:59:29.761024   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 21:59:29.776639   65170 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 21:59:29.791749   65170 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 21:59:29.914380   65170 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 21:59:30.096200   65170 docker.go:233] disabling docker service ...
	I0318 21:59:30.096281   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 21:59:30.112512   65170 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 21:59:30.126090   65170 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 21:59:30.258617   65170 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 21:59:30.397700   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 21:59:30.420478   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 21:59:30.443197   65170 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 21:59:30.443282   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.455577   65170 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 21:59:30.455630   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.467898   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.480041   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.492501   65170 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 21:59:30.505178   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.517657   65170 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.537376   65170 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 21:59:30.554749   65170 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 21:59:30.570281   65170 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 21:59:30.570352   65170 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 21:59:30.587991   65170 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 21:59:30.600354   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:30.744678   65170 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 21:59:30.902192   65170 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 21:59:30.902279   65170 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 21:59:30.907869   65170 start.go:562] Will wait 60s for crictl version
	I0318 21:59:30.907937   65170 ssh_runner.go:195] Run: which crictl
	I0318 21:59:30.913588   65170 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 21:59:30.957344   65170 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 21:59:30.957431   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:30.991141   65170 ssh_runner.go:195] Run: crio --version
	I0318 21:59:31.024452   65170 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 21:59:27.301221   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:29.799576   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:26.781379   65699 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.066468133s)
	I0318 21:59:26.781415   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 21:59:26.781445   65699 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:26.781493   65699 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 21:59:27.747707   65699 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18421-5321/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 21:59:27.747764   65699 cache_images.go:123] Successfully loaded all cached images
	I0318 21:59:27.747769   65699 cache_images.go:92] duration metric: took 17.337757279s to LoadCachedImages
	I0318 21:59:27.747781   65699 kubeadm.go:928] updating node { 192.168.72.84 8443 v1.29.0-rc.2 crio true true} ...
	I0318 21:59:27.747907   65699 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-963041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:27.747986   65699 ssh_runner.go:195] Run: crio config
	I0318 21:59:27.810020   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:27.810048   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:27.810060   65699 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:27.810078   65699 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.84 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-963041 NodeName:no-preload-963041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:27.810242   65699 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-963041"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:27.810327   65699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 21:59:27.823120   65699 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:27.823172   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:27.834742   65699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 21:59:27.854365   65699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 21:59:27.872873   65699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 21:59:27.891245   65699 ssh_runner.go:195] Run: grep 192.168.72.84	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:27.895305   65699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:27.907928   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:28.044997   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:28.064471   65699 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041 for IP: 192.168.72.84
	I0318 21:59:28.064489   65699 certs.go:194] generating shared ca certs ...
	I0318 21:59:28.064503   65699 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:28.064668   65699 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:28.064733   65699 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:28.064747   65699 certs.go:256] generating profile certs ...
	I0318 21:59:28.064847   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/client.key
	I0318 21:59:28.064927   65699 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key.53f57e82
	I0318 21:59:28.064975   65699 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key
	I0318 21:59:28.065090   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:28.065140   65699 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:28.065154   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:28.065190   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:28.065218   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:28.065244   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:28.065292   65699 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:28.066189   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:28.108239   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:28.147385   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:28.191255   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:28.231079   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 21:59:28.269730   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 21:59:28.302326   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:28.331762   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/no-preload-963041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:28.359487   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:28.390196   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:28.422323   65699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:28.452212   65699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:28.476910   65699 ssh_runner.go:195] Run: openssl version
	I0318 21:59:28.483480   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:28.495230   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500728   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.500771   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:28.507487   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:28.520368   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:28.533700   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540767   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.540817   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:28.549380   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:28.566307   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:28.582377   65699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589139   65699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.589192   65699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:28.597396   65699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:59:28.610189   65699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:28.616488   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:28.625547   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:28.634680   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:28.643077   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:28.652470   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:28.660641   65699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:59:28.669216   65699 kubeadm.go:391] StartCluster: {Name:no-preload-963041 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-963041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:28.669342   65699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:28.669444   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.719357   65699 cri.go:89] found id: ""
	I0318 21:59:28.719427   65699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:28.733158   65699 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:28.733179   65699 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:28.733186   65699 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:28.733234   65699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:28.744804   65699 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:28.745805   65699 kubeconfig.go:125] found "no-preload-963041" server: "https://192.168.72.84:8443"
	I0318 21:59:28.747888   65699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:28.757871   65699 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.84
	I0318 21:59:28.757896   65699 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:28.757918   65699 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:28.757964   65699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:28.805988   65699 cri.go:89] found id: ""
	I0318 21:59:28.806057   65699 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:28.829257   65699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:28.841515   65699 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:28.841543   65699 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:28.841594   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 21:59:28.853433   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:28.853499   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:28.864593   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 21:59:28.875236   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:28.875285   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:28.887756   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.898219   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:28.898271   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:28.909308   65699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 21:59:28.919480   65699 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:28.919540   65699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
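
The block above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here every file is simply missing, so all four are cleared for regeneration). A minimal Go sketch of the same check-and-remove loop, run locally rather than over SSH, for illustration only:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443" // expected server address
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent (or the file is missing),
    		// in which case the stale config is removed so kubeadm can regenerate it.
    		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
    			os.Remove(f)
    		}
    	}
    }
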
	I0318 21:59:28.930305   65699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:28.941125   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:29.056129   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.261585   65699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.205423679s)
	I0318 21:59:30.261614   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.498583   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:30.589160   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
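
After the cleanup, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of doing a full init. A rough sketch of that sequence; the paths and PATH prefix are taken from the log, but this is illustrative and not minikube's actual code:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		// Each phase is invoked separately so an existing cluster can be
    		// repaired without a full `kubeadm init`.
    		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH\" kubeadm init phase " +
    			p + " --config /var/tmp/minikube/kubeadm.yaml"
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			log.Fatalf("phase %q failed: %v\n%s", p, err, out)
    		}
    	}
    }
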
	I0318 21:59:30.713046   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:30.713150   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.214160   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.034539   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:30.535237   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.034842   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.534620   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.034614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:32.534583   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.035348   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:33.534614   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.034683   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:34.534528   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.025614   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetIP
	I0318 21:59:31.028381   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028758   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 21:59:31.028783   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 21:59:31.028960   65170 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 21:59:31.033836   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
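
The one-liner above is minikube's idiom for upserting a hosts entry: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts in one step. Roughly the same thing in Go (illustrative only, and assuming root; minikube runs the bash version over SSH):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.50.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any previous mapping for host.minikube.internal.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	// Write to a temp file first, then rename, so /etc/hosts is never half-written.
    	tmp := "/etc/hosts.minikube.tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
    		log.Fatal(err)
    	}
    }
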
	I0318 21:59:31.048652   65170 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 21:59:31.048798   65170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 21:59:31.048853   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:31.089246   65170 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 21:59:31.089322   65170 ssh_runner.go:195] Run: which lz4
	I0318 21:59:31.094026   65170 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 21:59:31.098900   65170 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 21:59:31.098929   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 21:59:33.166556   65170 crio.go:462] duration metric: took 2.072562246s to copy over tarball
	I0318 21:59:33.166639   65170 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
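
Because no preloaded images were found in the CRI-O store, minikube falls back to shipping the preload tarball: it stats /preloaded.tar.lz4 on the guest, scps the ~458 MB archive when it is missing, and unpacks it into /var with lz4. A condensed sketch of that check-then-extract flow (local copy stands in for the scp; illustrative only):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); os.IsNotExist(err) {
    		// In minikube this step is an scp of the cached preloaded-images-k8s-*.tar.lz4
    		// from the host; copying a local file stands in for it here.
    		if out, err := exec.Command("cp", "/tmp/preloaded-images.tar.lz4", tarball).CombinedOutput(); err != nil {
    			log.Fatalf("copy failed: %v\n%s", err, out)
    		}
    	}
    	// Extract into /var, preserving xattrs so image layers keep their capabilities.
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }
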
	I0318 21:59:31.810567   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:34.301018   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:36.346463   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:31.714009   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:31.762157   65699 api_server.go:72] duration metric: took 1.049110677s to wait for apiserver process to appear ...
	I0318 21:59:31.762188   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:31.762210   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:31.762737   65699 api_server.go:269] stopped: https://192.168.72.84:8443/healthz: Get "https://192.168.72.84:8443/healthz": dial tcp 192.168.72.84:8443: connect: connection refused
	I0318 21:59:32.263205   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.738750   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.738785   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.738802   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.804061   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.804102   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:34.804116   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:34.842097   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:34.842144   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:35.262351   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.267395   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.267439   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:35.763016   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:35.775072   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:35.775109   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.262338   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:36.267165   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:36.267207   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:36.762879   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.074225   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:37.074263   65699 api_server.go:103] status: https://192.168.72.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:37.262637   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 21:59:37.267514   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 21:59:37.275551   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 21:59:37.275579   65699 api_server.go:131] duration metric: took 5.513383348s to wait for apiserver health ...
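
The 403 and 500 responses above are the normal progression while the restarted apiserver finishes starting: anonymous requests to /healthz are rejected until the RBAC bootstrap roles exist, then individual post-start hooks flip from failed to ok, and minikube simply re-polls the endpoint every ~500ms until it returns 200. A stripped-down version of that wait loop; the endpoint and interval come from the log, and certificate verification is skipped here only to keep the sketch short:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver cert is signed by minikube's own CA; a real client
    		// would load that CA instead of skipping verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.72.84:8443/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }
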
	I0318 21:59:37.275590   65699 cni.go:84] Creating CNI manager for ""
	I0318 21:59:37.275598   65699 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:37.496330   65699 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:37.641915   65699 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:37.659277   65699 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
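
The bridge CNI configuration pushed here is a small JSON conflist (457 bytes in this run). Its exact contents are not printed in the log; a generic bridge + host-local config of roughly that shape, written the same way minikube does (an in-memory buffer copied to /etc/cni/net.d/1-k8s.conflist), might look like the following. This is purely illustrative; the real file may differ in names, ranges, and plugins:

    package main

    import (
    	"log"
    	"os"
    )

    // conflist is a generic bridge/host-local CNI config used only as an example.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		log.Fatal(err)
    	}
    }
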
	I0318 21:59:37.684019   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:38.075296   65699 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:38.075333   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:38.075353   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:38.075367   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:38.075375   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:38.075388   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:38.075407   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:38.075418   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:38.075429   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:38.075440   65699 system_pods.go:74] duration metric: took 391.399859ms to wait for pod list to return data ...
	I0318 21:59:38.075452   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:38.252627   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:38.252659   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:38.252670   65699 node_conditions.go:105] duration metric: took 177.209294ms to run NodePressure ...
	I0318 21:59:38.252692   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.662257   65699 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670807   65699 kubeadm.go:733] kubelet initialised
	I0318 21:59:38.670836   65699 kubeadm.go:734] duration metric: took 8.550399ms waiting for restarted kubelet to initialise ...
	I0318 21:59:38.670846   65699 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:38.680740   65699 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.689134   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689157   65699 pod_ready.go:81] duration metric: took 8.393104ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.689169   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "coredns-76f75df574-6mtzp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.689178   65699 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.693796   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693815   65699 pod_ready.go:81] duration metric: took 4.628403ms for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.693824   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "etcd-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.693829   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.701225   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701245   65699 pod_ready.go:81] duration metric: took 7.410052ms for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.701254   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-apiserver-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.701262   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:38.707848   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707871   65699 pod_ready.go:81] duration metric: took 6.598987ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:38.707882   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:38.707889   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.066641   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066668   65699 pod_ready.go:81] duration metric: took 358.769058ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.066679   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-proxy-kkrzx" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.066687   65699 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.466406   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466440   65699 pod_ready.go:81] duration metric: took 399.746217ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.466449   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "kube-scheduler-no-preload-963041" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.466455   65699 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:39.866206   65699 pod_ready.go:97] node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866232   65699 pod_ready.go:81] duration metric: took 399.76891ms for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:39.866240   65699 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-963041" hosting pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:39.866247   65699 pod_ready.go:38] duration metric: took 1.195391629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
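
The extra wait above skips every system pod on the first pass because the node itself still reports Ready:"False" right after the kubelet restart; what pod_ready.go keeps re-checking afterwards is each pod's Ready condition. A minimal equivalent of that readiness poll using kubectl from Go (a hypothetical helper, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady reports whether the named pod's Ready condition is True.
    func podReady(namespace, name string) bool {
    	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if podReady("kube-system", "etcd-no-preload-963041") {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }
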
	I0318 21:59:39.866263   65699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 21:59:39.879772   65699 ops.go:34] apiserver oom_adj: -16
	I0318 21:59:39.879796   65699 kubeadm.go:591] duration metric: took 11.146603139s to restartPrimaryControlPlane
	I0318 21:59:39.879807   65699 kubeadm.go:393] duration metric: took 11.21059758s to StartCluster
	I0318 21:59:39.879825   65699 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.879915   65699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:59:39.881739   65699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:39.881970   65699 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 21:59:39.883934   65699 out.go:177] * Verifying Kubernetes components...
	I0318 21:59:39.882064   65699 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 21:59:39.882254   65699 config.go:182] Loaded profile config "no-preload-963041": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 21:59:39.885913   65699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:39.885924   65699 addons.go:69] Setting metrics-server=true in profile "no-preload-963041"
	I0318 21:59:39.885932   65699 addons.go:69] Setting default-storageclass=true in profile "no-preload-963041"
	I0318 21:59:39.885950   65699 addons.go:234] Setting addon metrics-server=true in "no-preload-963041"
	W0318 21:59:39.885958   65699 addons.go:243] addon metrics-server should already be in state true
	I0318 21:59:39.885966   65699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-963041"
	I0318 21:59:39.885918   65699 addons.go:69] Setting storage-provisioner=true in profile "no-preload-963041"
	I0318 21:59:39.885985   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886000   65699 addons.go:234] Setting addon storage-provisioner=true in "no-preload-963041"
	W0318 21:59:39.886052   65699 addons.go:243] addon storage-provisioner should already be in state true
	I0318 21:59:39.886075   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.886384   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886403   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886437   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886392   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.886448   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.886438   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.902103   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0318 21:59:39.902574   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.903192   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.903211   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.903568   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.904113   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.904142   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.908122   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0318 21:59:39.908269   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0318 21:59:39.908566   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.908639   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.909237   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.909251   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.909662   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.909834   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.913534   65699 addons.go:234] Setting addon default-storageclass=true in "no-preload-963041"
	W0318 21:59:39.913558   65699 addons.go:243] addon default-storageclass should already be in state true
	I0318 21:59:39.913586   65699 host.go:66] Checking if "no-preload-963041" exists ...
	I0318 21:59:39.913959   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.913992   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.921260   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.921284   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.921661   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.922725   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.922778   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.925575   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0318 21:59:39.926170   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.926799   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.926819   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.933014   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0318 21:59:39.933066   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.934464   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.934527   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.935441   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.935456   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.936236   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.936821   65699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:59:39.936870   65699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:59:39.936983   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.938986   65699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 21:59:39.940103   65699 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:39.940115   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 21:59:39.940128   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.942712   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943138   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.943168   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.943415   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.943574   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.943690   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.943828   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.944813   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33553
	I0318 21:59:39.961605   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.962117   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.962140   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.962564   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.962745   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.964606   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.970697   65699 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 21:59:35.034845   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:35.535418   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.534613   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.034944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:37.535119   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.035549   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:38.534668   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.034813   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.534586   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:36.222479   65170 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.055805805s)
	I0318 21:59:36.222507   65170 crio.go:469] duration metric: took 3.055923767s to extract the tarball
	I0318 21:59:36.222515   65170 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 21:59:36.265990   65170 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 21:59:36.314679   65170 crio.go:514] all images are preloaded for cri-o runtime.
	I0318 21:59:36.314704   65170 cache_images.go:84] Images are preloaded, skipping loading
	I0318 21:59:36.314714   65170 kubeadm.go:928] updating node { 192.168.50.150 8444 v1.28.4 crio true true} ...
	I0318 21:59:36.314828   65170 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-660775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 21:59:36.314900   65170 ssh_runner.go:195] Run: crio config
	I0318 21:59:36.375889   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:36.375908   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:36.375916   65170 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 21:59:36.375935   65170 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.150 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-660775 NodeName:default-k8s-diff-port-660775 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 21:59:36.376058   65170 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.150
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-660775"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 21:59:36.376117   65170 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 21:59:36.387851   65170 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 21:59:36.387905   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 21:59:36.398095   65170 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0318 21:59:36.416507   65170 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 21:59:36.437165   65170 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0318 21:59:36.458125   65170 ssh_runner.go:195] Run: grep 192.168.50.150	control-plane.minikube.internal$ /etc/hosts
	I0318 21:59:36.462688   65170 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 21:59:36.476913   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 21:59:36.629523   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:36.648679   65170 certs.go:68] Setting up /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775 for IP: 192.168.50.150
	I0318 21:59:36.648697   65170 certs.go:194] generating shared ca certs ...
	I0318 21:59:36.648717   65170 certs.go:226] acquiring lock for ca certs: {Name:mk9ff12f9299606f9768ecbdfa24f15ecf095a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 21:59:36.648870   65170 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key
	I0318 21:59:36.648942   65170 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key
	I0318 21:59:36.648956   65170 certs.go:256] generating profile certs ...
	I0318 21:59:36.649061   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/client.key
	I0318 21:59:36.649136   65170 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key.6eb93750
	I0318 21:59:36.649181   65170 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key
	I0318 21:59:36.649342   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem (1338 bytes)
	W0318 21:59:36.649408   65170 certs.go:480] ignoring /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568_empty.pem, impossibly tiny 0 bytes
	I0318 21:59:36.649427   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 21:59:36.649465   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/ca.pem (1078 bytes)
	I0318 21:59:36.649502   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/cert.pem (1123 bytes)
	I0318 21:59:36.649524   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/certs/key.pem (1679 bytes)
	I0318 21:59:36.649563   65170 certs.go:484] found cert: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem (1708 bytes)
	I0318 21:59:36.650116   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 21:59:36.709130   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 21:59:36.777530   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 21:59:36.822349   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 21:59:36.861155   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 21:59:36.899264   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 21:59:36.930697   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 21:59:36.960715   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/default-k8s-diff-port-660775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 21:59:36.992062   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/certs/12568.pem --> /usr/share/ca-certificates/12568.pem (1338 bytes)
	I0318 21:59:37.020001   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/ssl/certs/125682.pem --> /usr/share/ca-certificates/125682.pem (1708 bytes)
	I0318 21:59:37.051443   65170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18421-5321/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 21:59:37.080115   65170 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 21:59:37.102221   65170 ssh_runner.go:195] Run: openssl version
	I0318 21:59:37.111020   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12568.pem && ln -fs /usr/share/ca-certificates/12568.pem /etc/ssl/certs/12568.pem"
	I0318 21:59:37.127447   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132675   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 20:42 /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.132730   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12568.pem
	I0318 21:59:37.139092   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12568.pem /etc/ssl/certs/51391683.0"
	I0318 21:59:37.151349   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125682.pem && ln -fs /usr/share/ca-certificates/125682.pem /etc/ssl/certs/125682.pem"
	I0318 21:59:37.166470   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172601   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 20:42 /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.172656   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125682.pem
	I0318 21:59:37.179404   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125682.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 21:59:37.192628   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 21:59:37.206758   65170 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211839   65170 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 20:32 /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.211882   65170 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 21:59:37.218285   65170 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 21:59:37.230291   65170 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 21:59:37.235312   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 21:59:37.242399   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 21:59:37.249658   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 21:59:37.256458   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 21:59:37.263110   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 21:59:37.270329   65170 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 21:59:37.277040   65170 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-660775 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-660775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 21:59:37.277140   65170 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 21:59:37.277176   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.320525   65170 cri.go:89] found id: ""
	I0318 21:59:37.320595   65170 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 21:59:37.332584   65170 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 21:59:37.332602   65170 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 21:59:37.332608   65170 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 21:59:37.332678   65170 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 21:59:37.348017   65170 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:59:37.349557   65170 kubeconfig.go:125] found "default-k8s-diff-port-660775" server: "https://192.168.50.150:8444"
	I0318 21:59:37.352826   65170 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 21:59:37.367223   65170 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.150
	I0318 21:59:37.367256   65170 kubeadm.go:1154] stopping kube-system containers ...
	I0318 21:59:37.367267   65170 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 21:59:37.367315   65170 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 21:59:37.411319   65170 cri.go:89] found id: ""
	I0318 21:59:37.411401   65170 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 21:59:37.431545   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 21:59:37.442587   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 21:59:37.442610   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 21:59:37.442661   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 21:59:37.452384   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 21:59:37.452439   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 21:59:37.462519   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 21:59:37.472669   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 21:59:37.472728   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 21:59:37.483107   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.493177   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 21:59:37.493224   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 21:59:37.503546   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 21:59:37.513471   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 21:59:37.513512   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 21:59:37.524147   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 21:59:37.534940   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:37.665308   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:38.882330   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.216992532s)
	I0318 21:59:38.882356   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.110948   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.217267   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:39.332300   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 21:59:39.332389   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.833190   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:39.972027   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 21:59:39.972078   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 21:59:39.972109   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.975122   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975608   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.975627   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.975994   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.976196   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.976371   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.976663   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:39.982859   65699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I0318 21:59:39.983263   65699 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:59:39.983860   65699 main.go:141] libmachine: Using API Version  1
	I0318 21:59:39.983904   65699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:59:39.984308   65699 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:59:39.984558   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetState
	I0318 21:59:39.986338   65699 main.go:141] libmachine: (no-preload-963041) Calling .DriverName
	I0318 21:59:39.986645   65699 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:39.986690   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 21:59:39.986718   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHHostname
	I0318 21:59:39.989398   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989741   65699 main.go:141] libmachine: (no-preload-963041) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:30:3e", ip: ""} in network mk-no-preload-963041: {Iface:virbr4 ExpiryTime:2024-03-18 22:48:55 +0000 UTC Type:0 Mac:52:54:00:b2:30:3e Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:no-preload-963041 Clientid:01:52:54:00:b2:30:3e}
	I0318 21:59:39.989999   65699 main.go:141] libmachine: (no-preload-963041) DBG | domain no-preload-963041 has defined IP address 192.168.72.84 and MAC address 52:54:00:b2:30:3e in network mk-no-preload-963041
	I0318 21:59:39.989951   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHPort
	I0318 21:59:39.990229   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHKeyPath
	I0318 21:59:39.990392   65699 main.go:141] libmachine: (no-preload-963041) Calling .GetSSHUsername
	I0318 21:59:39.990517   65699 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/no-preload-963041/id_rsa Username:docker}
	I0318 21:59:40.115233   65699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 21:59:40.136271   65699 node_ready.go:35] waiting up to 6m0s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:40.232668   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 21:59:40.234394   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 21:59:40.234417   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 21:59:40.256237   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 21:59:40.301845   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 21:59:40.301873   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 21:59:40.354405   65699 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:40.354435   65699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 21:59:40.377996   65699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 21:59:41.389416   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.156705132s)
	I0318 21:59:41.389429   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.133120616s)
	I0318 21:59:41.389470   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389475   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389482   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389486   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389763   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389783   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389792   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389799   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.389828   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.389874   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.389890   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.389899   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.389938   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.390199   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390398   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.390339   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390375   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.390451   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.390470   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.397714   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.397736   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.397951   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.397999   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.398017   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.415620   65699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.037584799s)
	I0318 21:59:41.415673   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.415684   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.415964   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.415992   65699 main.go:141] libmachine: (no-preload-963041) DBG | Closing plugin on server side
	I0318 21:59:41.416007   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416016   65699 main.go:141] libmachine: Making call to close driver server
	I0318 21:59:41.416027   65699 main.go:141] libmachine: (no-preload-963041) Calling .Close
	I0318 21:59:41.416207   65699 main.go:141] libmachine: Successfully made call to close driver server
	I0318 21:59:41.416220   65699 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 21:59:41.416229   65699 addons.go:470] Verifying addon metrics-server=true in "no-preload-963041"
	I0318 21:59:41.418761   65699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 21:59:38.798943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:40.800913   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:41.420038   65699 addons.go:505] duration metric: took 1.537986468s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 21:59:40.332810   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.411342   65170 api_server.go:72] duration metric: took 1.079036948s to wait for apiserver process to appear ...
	I0318 21:59:40.411371   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 21:59:40.411394   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:40.411932   65170 api_server.go:269] stopped: https://192.168.50.150:8444/healthz: Get "https://192.168.50.150:8444/healthz": dial tcp 192.168.50.150:8444: connect: connection refused
	I0318 21:59:40.911545   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.377410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.377443   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.377471   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.426410   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0318 21:59:43.426468   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0318 21:59:43.426485   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.448464   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.448523   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:43.912498   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:43.918271   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:43.918309   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.411824   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.422200   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 21:59:44.422223   65170 api_server.go:103] status: https://192.168.50.150:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 21:59:44.911509   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 21:59:44.916884   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 21:59:44.928835   65170 api_server.go:141] control plane version: v1.28.4
	I0318 21:59:44.928862   65170 api_server.go:131] duration metric: took 4.517483413s to wait for apiserver health ...
	I0318 21:59:44.928872   65170 cni.go:84] Creating CNI manager for ""
	I0318 21:59:44.928881   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 21:59:44.930794   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 21:59:40.035532   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:40.535482   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.035196   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:41.534632   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.035183   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:42.535562   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.034598   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:43.534971   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.034552   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.535025   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:44.932164   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 21:59:44.959217   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 21:59:45.002449   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 21:59:45.017348   65170 system_pods.go:59] 8 kube-system pods found
	I0318 21:59:45.017394   65170 system_pods.go:61] "coredns-5dd5756b68-cjq2v" [9ae899ef-63e4-407d-9013-71552ec87614] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 21:59:45.017407   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [286b98ba-bc9e-4e2f-984c-d7b2447aef15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 21:59:45.017417   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [7a0db461-f8d5-4331-993e-d7b9345159e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 21:59:45.017428   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [e4f5859a-dfcc-41d8-9a17-acb601449821] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 21:59:45.017443   65170 system_pods.go:61] "kube-proxy-qt2m6" [c3c7c6db-4935-4079-b0e7-60ba2cd886b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 21:59:45.017450   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [7115eef0-5ff4-4dfe-9135-88ad8f698e43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 21:59:45.017461   65170 system_pods.go:61] "metrics-server-57f55c9bc5-5dtf5" [b19191ee-e2db-4392-82e2-1a95fae76101] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 21:59:45.017489   65170 system_pods.go:61] "storage-provisioner" [045d4b30-47a3-4c80-a9e8-c36ef7395e6c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 21:59:45.017498   65170 system_pods.go:74] duration metric: took 15.027239ms to wait for pod list to return data ...
	I0318 21:59:45.017511   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 21:59:45.020962   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 21:59:45.020982   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 21:59:45.020991   65170 node_conditions.go:105] duration metric: took 3.47292ms to run NodePressure ...
	I0318 21:59:45.021007   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 21:59:45.277662   65170 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282939   65170 kubeadm.go:733] kubelet initialised
	I0318 21:59:45.282958   65170 kubeadm.go:734] duration metric: took 5.277143ms waiting for restarted kubelet to initialise ...
	I0318 21:59:45.282965   65170 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.289546   65170 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:43.299509   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:45.300875   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:42.142145   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:44.641863   65699 node_ready.go:53] node "no-preload-963041" has status "Ready":"False"
	I0318 21:59:45.640660   65699 node_ready.go:49] node "no-preload-963041" has status "Ready":"True"
	I0318 21:59:45.640686   65699 node_ready.go:38] duration metric: took 5.50437071s for node "no-preload-963041" to be "Ready" ...
	I0318 21:59:45.640697   65699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 21:59:45.647087   65699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652062   65699 pod_ready.go:92] pod "coredns-76f75df574-6mtzp" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.652081   65699 pod_ready.go:81] duration metric: took 4.969873ms for pod "coredns-76f75df574-6mtzp" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.652091   65699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.035239   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.535303   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.034742   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:46.534584   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.034935   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:47.534952   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.034610   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:48.534497   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.035380   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:49.535498   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:45.296790   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298834   65170 pod_ready.go:81] duration metric: took 9.259848ms for pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.298849   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "coredns-5dd5756b68-cjq2v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.298868   65170 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.307325   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307367   65170 pod_ready.go:81] duration metric: took 8.486967ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.307380   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.307389   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.319473   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319498   65170 pod_ready.go:81] duration metric: took 12.100242ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.319514   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.319522   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.407356   65170 pod_ready.go:97] node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407379   65170 pod_ready.go:81] duration metric: took 87.846686ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	E0318 21:59:45.407390   65170 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-660775" hosting pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-660775" has status "Ready":"False"
	I0318 21:59:45.407395   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806835   65170 pod_ready.go:92] pod "kube-proxy-qt2m6" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:45.806866   65170 pod_ready.go:81] duration metric: took 399.462221ms for pod "kube-proxy-qt2m6" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:45.806878   65170 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:47.814286   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:47.799616   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:50.300118   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:46.659819   65699 pod_ready.go:92] pod "etcd-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:46.659855   65699 pod_ready.go:81] duration metric: took 1.007755238s for pod "etcd-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:46.659868   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:48.669033   65699 pod_ready.go:102] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:51.168202   65699 pod_ready.go:92] pod "kube-apiserver-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.168229   65699 pod_ready.go:81] duration metric: took 4.508354098s for pod "kube-apiserver-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.168240   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174243   65699 pod_ready.go:92] pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.174268   65699 pod_ready.go:81] duration metric: took 6.018685ms for pod "kube-controller-manager-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.174280   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179279   65699 pod_ready.go:92] pod "kube-proxy-kkrzx" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.179300   65699 pod_ready.go:81] duration metric: took 5.012711ms for pod "kube-proxy-kkrzx" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.179311   65699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185651   65699 pod_ready.go:92] pod "kube-scheduler-no-preload-963041" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:51.185670   65699 pod_ready.go:81] duration metric: took 6.351567ms for pod "kube-scheduler-no-preload-963041" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:51.185678   65699 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:50.034691   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.534680   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:51.535213   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.034594   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:52.535195   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.034574   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:53.535423   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.035369   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:54.534621   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:50.315135   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.814432   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:52.798645   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:54.800561   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:53.191834   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.192346   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:55.035308   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:55.535503   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.035231   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:56.534937   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.035317   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:57.534581   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.034565   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:58.534830   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.034910   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:59:59.535280   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 21:59:59.535354   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 21:59:59.577600   65622 cri.go:89] found id: ""
	I0318 21:59:59.577632   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.577643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 21:59:59.577651   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 21:59:59.577710   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 21:59:59.614134   65622 cri.go:89] found id: ""
	I0318 21:59:59.614158   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.614166   65622 logs.go:278] No container was found matching "etcd"
	I0318 21:59:59.614171   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 21:59:59.614245   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 21:59:59.653525   65622 cri.go:89] found id: ""
	I0318 21:59:59.653559   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.653571   65622 logs.go:278] No container was found matching "coredns"
	I0318 21:59:59.653578   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 21:59:59.653633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 21:59:59.699104   65622 cri.go:89] found id: ""
	I0318 21:59:59.699128   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.699139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 21:59:59.699146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 21:59:59.699214   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 21:59:59.735750   65622 cri.go:89] found id: ""
	I0318 21:59:59.735779   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.735789   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 21:59:59.735796   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 21:59:59.735876   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 21:59:59.775105   65622 cri.go:89] found id: ""
	I0318 21:59:59.775134   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.775142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 21:59:59.775149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 21:59:59.775193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 21:59:59.814154   65622 cri.go:89] found id: ""
	I0318 21:59:59.814181   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.814190   65622 logs.go:278] No container was found matching "kindnet"
	I0318 21:59:59.814197   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 21:59:59.814254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 21:59:59.852518   65622 cri.go:89] found id: ""
	I0318 21:59:59.852545   65622 logs.go:276] 0 containers: []
	W0318 21:59:59.852556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 21:59:59.852565   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 21:59:59.852578   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 21:59:59.907243   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 21:59:59.907285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 21:59:59.922512   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 21:59:59.922540   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 21:59:55.313448   65170 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.813863   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 21:59:56.813885   65170 pod_ready.go:81] duration metric: took 11.006997984s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:56.813893   65170 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	I0318 21:59:58.820535   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:56.802709   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:59.299235   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:01.299761   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 21:59:57.694309   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:00.192594   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	W0318 22:00:00.059182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:00.059202   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:00.059216   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:00.125654   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:00.125686   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:02.675440   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:02.689549   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:02.689628   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:02.731742   65622 cri.go:89] found id: ""
	I0318 22:00:02.731764   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.731771   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:02.731776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:02.731823   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:02.809611   65622 cri.go:89] found id: ""
	I0318 22:00:02.809643   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.809651   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:02.809656   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:02.809699   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:02.853939   65622 cri.go:89] found id: ""
	I0318 22:00:02.853972   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.853982   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:02.853990   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:02.854050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:02.892668   65622 cri.go:89] found id: ""
	I0318 22:00:02.892699   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.892709   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:02.892715   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:02.892773   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:02.934267   65622 cri.go:89] found id: ""
	I0318 22:00:02.934296   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.934307   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:02.934313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:02.934370   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:02.972533   65622 cri.go:89] found id: ""
	I0318 22:00:02.972556   65622 logs.go:276] 0 containers: []
	W0318 22:00:02.972564   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:02.972569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:02.972614   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:03.011102   65622 cri.go:89] found id: ""
	I0318 22:00:03.011128   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.011137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:03.011142   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:03.011188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:03.060636   65622 cri.go:89] found id: ""
	I0318 22:00:03.060664   65622 logs.go:276] 0 containers: []
	W0318 22:00:03.060673   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:03.060696   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:03.060710   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:03.145042   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:03.145070   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:03.145087   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:03.218475   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:03.218504   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:03.262154   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:03.262185   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:03.316766   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:03.316803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:00.821070   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.821300   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:03.301922   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.799844   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:02.693235   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:04.693324   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:05.833936   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:05.850780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:05.850858   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:05.894909   65622 cri.go:89] found id: ""
	I0318 22:00:05.894931   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.894938   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:05.894944   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:05.894987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:05.935989   65622 cri.go:89] found id: ""
	I0318 22:00:05.936020   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.936028   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:05.936032   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:05.936081   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:05.976774   65622 cri.go:89] found id: ""
	I0318 22:00:05.976797   65622 logs.go:276] 0 containers: []
	W0318 22:00:05.976805   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:05.976811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:05.976869   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:06.015350   65622 cri.go:89] found id: ""
	I0318 22:00:06.015376   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.015387   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:06.015394   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:06.015453   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:06.059389   65622 cri.go:89] found id: ""
	I0318 22:00:06.059416   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.059427   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:06.059434   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:06.059513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:06.099524   65622 cri.go:89] found id: ""
	I0318 22:00:06.099544   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.099553   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:06.099558   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:06.099601   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:06.140343   65622 cri.go:89] found id: ""
	I0318 22:00:06.140374   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.140386   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:06.140393   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:06.140448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:06.179217   65622 cri.go:89] found id: ""
	I0318 22:00:06.179247   65622 logs.go:276] 0 containers: []
	W0318 22:00:06.179257   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:06.179268   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:06.179286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:06.231348   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:06.231379   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:06.246049   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:06.246084   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:06.326182   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:06.326203   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:06.326215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:06.405862   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:06.405895   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:08.955965   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:08.970007   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:08.970076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:09.008724   65622 cri.go:89] found id: ""
	I0318 22:00:09.008752   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.008764   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:09.008781   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:09.008856   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:09.050121   65622 cri.go:89] found id: ""
	I0318 22:00:09.050158   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.050165   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:09.050170   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:09.050227   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:09.090263   65622 cri.go:89] found id: ""
	I0318 22:00:09.090293   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.090304   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:09.090312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:09.090375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:09.127645   65622 cri.go:89] found id: ""
	I0318 22:00:09.127679   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.127690   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:09.127697   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:09.127755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:09.169171   65622 cri.go:89] found id: ""
	I0318 22:00:09.169199   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.169211   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:09.169218   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:09.169278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:09.209923   65622 cri.go:89] found id: ""
	I0318 22:00:09.209949   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.209956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:09.209963   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:09.210013   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:09.247990   65622 cri.go:89] found id: ""
	I0318 22:00:09.248029   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.248039   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:09.248050   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:09.248109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:09.287287   65622 cri.go:89] found id: ""
	I0318 22:00:09.287326   65622 logs.go:276] 0 containers: []
	W0318 22:00:09.287337   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:09.287347   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:09.287369   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:09.342877   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:09.342902   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:09.359137   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:09.359159   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:09.454504   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:09.454528   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:09.454543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:09.549191   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:09.549223   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:05.322655   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.820557   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.821227   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:07.799881   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.802803   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:06.694723   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:09.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.096415   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:12.112886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:12.112969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:12.155639   65622 cri.go:89] found id: ""
	I0318 22:00:12.155662   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.155670   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:12.155676   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:12.155729   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:12.199252   65622 cri.go:89] found id: ""
	I0318 22:00:12.199283   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.199293   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:12.199301   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:12.199385   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:12.239688   65622 cri.go:89] found id: ""
	I0318 22:00:12.239719   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.239728   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:12.239734   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:12.239788   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:12.278610   65622 cri.go:89] found id: ""
	I0318 22:00:12.278640   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.278651   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:12.278659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:12.278724   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:12.318834   65622 cri.go:89] found id: ""
	I0318 22:00:12.318864   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.318873   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:12.318881   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:12.318939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:12.358964   65622 cri.go:89] found id: ""
	I0318 22:00:12.358986   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.358994   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:12.359002   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:12.359050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:12.399041   65622 cri.go:89] found id: ""
	I0318 22:00:12.399070   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.399080   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:12.399087   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:12.399151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:12.445019   65622 cri.go:89] found id: ""
	I0318 22:00:12.445043   65622 logs.go:276] 0 containers: []
	W0318 22:00:12.445053   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:12.445064   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:12.445079   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:12.504987   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:12.505023   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:12.521381   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:12.521408   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:12.601574   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:12.601599   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:12.601615   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:12.683772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:12.683801   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:11.821593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:13.821792   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:12.299680   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.300073   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:11.693179   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:14.194532   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:15.229005   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:15.248227   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:15.248296   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:15.307918   65622 cri.go:89] found id: ""
	I0318 22:00:15.307940   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.307947   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:15.307953   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:15.307997   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:15.367388   65622 cri.go:89] found id: ""
	I0318 22:00:15.367417   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.367436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:15.367453   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:15.367513   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:15.410880   65622 cri.go:89] found id: ""
	I0318 22:00:15.410910   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.410919   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:15.410926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:15.410983   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:15.450980   65622 cri.go:89] found id: ""
	I0318 22:00:15.451004   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.451011   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:15.451018   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:15.451071   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:15.491196   65622 cri.go:89] found id: ""
	I0318 22:00:15.491222   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.491233   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:15.491239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:15.491284   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:15.537135   65622 cri.go:89] found id: ""
	I0318 22:00:15.537159   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.537166   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:15.537173   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:15.537226   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:15.580730   65622 cri.go:89] found id: ""
	I0318 22:00:15.580762   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.580772   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:15.580780   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:15.580852   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:15.626221   65622 cri.go:89] found id: ""
	I0318 22:00:15.626252   65622 logs.go:276] 0 containers: []
	W0318 22:00:15.626265   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:15.626276   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:15.626292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.670571   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:15.670600   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:15.725485   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:15.725519   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:15.742790   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:15.742820   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:15.824867   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:15.824889   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:15.824924   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.407070   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:18.421757   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:18.421824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:18.461024   65622 cri.go:89] found id: ""
	I0318 22:00:18.461044   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.461052   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:18.461058   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:18.461104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:18.499002   65622 cri.go:89] found id: ""
	I0318 22:00:18.499032   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.499040   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:18.499046   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:18.499091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:18.539207   65622 cri.go:89] found id: ""
	I0318 22:00:18.539237   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.539248   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:18.539255   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:18.539315   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:18.579691   65622 cri.go:89] found id: ""
	I0318 22:00:18.579717   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.579726   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:18.579733   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:18.579814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:18.625084   65622 cri.go:89] found id: ""
	I0318 22:00:18.625111   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.625120   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:18.625126   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:18.625178   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:18.669012   65622 cri.go:89] found id: ""
	I0318 22:00:18.669038   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.669047   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:18.669053   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:18.669101   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:18.707523   65622 cri.go:89] found id: ""
	I0318 22:00:18.707544   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.707551   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:18.707557   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:18.707611   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:18.755138   65622 cri.go:89] found id: ""
	I0318 22:00:18.755162   65622 logs.go:276] 0 containers: []
	W0318 22:00:18.755173   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:18.755184   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:18.755199   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:18.809140   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:18.809163   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:18.827102   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:18.827125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:18.904168   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:18.904194   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:18.904209   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:18.982438   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:18.982471   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:15.822593   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.321691   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.798687   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.802403   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.302525   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:16.692709   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:18.692875   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:20.693620   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:21.532643   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:21.547477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:21.547545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:21.585013   65622 cri.go:89] found id: ""
	I0318 22:00:21.585038   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.585049   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:21.585056   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:21.585114   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:21.628115   65622 cri.go:89] found id: ""
	I0318 22:00:21.628139   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.628147   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:21.628153   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:21.628207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:21.664896   65622 cri.go:89] found id: ""
	I0318 22:00:21.664931   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.664942   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:21.664948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:21.665010   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:21.705770   65622 cri.go:89] found id: ""
	I0318 22:00:21.705794   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.705803   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:21.705811   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:21.705868   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:21.751268   65622 cri.go:89] found id: ""
	I0318 22:00:21.751296   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.751305   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:21.751313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:21.751376   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:21.798688   65622 cri.go:89] found id: ""
	I0318 22:00:21.798714   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.798724   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:21.798732   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:21.798800   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:21.839253   65622 cri.go:89] found id: ""
	I0318 22:00:21.839281   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.839290   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:21.839297   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:21.839365   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:21.884026   65622 cri.go:89] found id: ""
	I0318 22:00:21.884055   65622 logs.go:276] 0 containers: []
	W0318 22:00:21.884068   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:21.884086   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:21.884105   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:21.940412   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:21.940446   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:21.956634   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:21.956660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:22.031458   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:22.031481   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:22.031497   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:22.115902   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:22.115932   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:24.665945   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:24.680474   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:24.680545   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:24.719692   65622 cri.go:89] found id: ""
	I0318 22:00:24.719711   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.719718   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:24.719723   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:24.719768   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:24.760734   65622 cri.go:89] found id: ""
	I0318 22:00:24.760758   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.760767   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:24.760775   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:24.760830   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:24.802688   65622 cri.go:89] found id: ""
	I0318 22:00:24.802710   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.802717   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:24.802723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:24.802778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:24.842693   65622 cri.go:89] found id: ""
	I0318 22:00:24.842715   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.842723   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:24.842730   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:24.842796   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:24.887149   65622 cri.go:89] found id: ""
	I0318 22:00:24.887173   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.887185   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:24.887195   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:24.887278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:24.926465   65622 cri.go:89] found id: ""
	I0318 22:00:24.926511   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.926522   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:24.926530   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:24.926584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:24.966876   65622 cri.go:89] found id: ""
	I0318 22:00:24.966897   65622 logs.go:276] 0 containers: []
	W0318 22:00:24.966904   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:24.966910   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:24.966957   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:20.820297   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:22.821250   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:24.825337   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.800104   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:26.299105   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:23.193665   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.194188   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:25.007251   65622 cri.go:89] found id: ""
	I0318 22:00:25.007277   65622 logs.go:276] 0 containers: []
	W0318 22:00:25.007288   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:25.007298   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:25.007311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:25.092214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:25.092235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:25.092247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:25.173041   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:25.173076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:25.221169   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:25.221194   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:25.276322   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:25.276352   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:27.792368   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:27.809294   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:27.809359   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:27.848976   65622 cri.go:89] found id: ""
	I0318 22:00:27.849005   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.849015   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:27.849023   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:27.849076   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:27.890416   65622 cri.go:89] found id: ""
	I0318 22:00:27.890437   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.890445   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:27.890450   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:27.890505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:27.934782   65622 cri.go:89] found id: ""
	I0318 22:00:27.934807   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.934819   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:27.934827   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:27.934911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:27.972251   65622 cri.go:89] found id: ""
	I0318 22:00:27.972275   65622 logs.go:276] 0 containers: []
	W0318 22:00:27.972283   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:27.972288   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:27.972366   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:28.011321   65622 cri.go:89] found id: ""
	I0318 22:00:28.011345   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.011357   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:28.011363   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:28.011421   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:28.048087   65622 cri.go:89] found id: ""
	I0318 22:00:28.048109   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.048116   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:28.048122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:28.048169   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:28.088840   65622 cri.go:89] found id: ""
	I0318 22:00:28.088868   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.088878   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:28.088886   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:28.088961   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:28.128687   65622 cri.go:89] found id: ""
	I0318 22:00:28.128714   65622 logs.go:276] 0 containers: []
	W0318 22:00:28.128723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:28.128733   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:28.128745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:28.170853   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:28.170882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:28.224825   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:28.224850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:28.239744   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:28.239773   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:28.318640   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:28.318664   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:28.318680   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:27.321417   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:29.326924   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:28.798399   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.800456   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:27.692517   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.194633   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:30.897430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:30.914894   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:30.914950   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:30.952709   65622 cri.go:89] found id: ""
	I0318 22:00:30.952737   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.952748   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:30.952756   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:30.952814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:30.991113   65622 cri.go:89] found id: ""
	I0318 22:00:30.991142   65622 logs.go:276] 0 containers: []
	W0318 22:00:30.991151   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:30.991159   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:30.991218   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:31.030248   65622 cri.go:89] found id: ""
	I0318 22:00:31.030273   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.030283   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:31.030291   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:31.030356   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:31.070836   65622 cri.go:89] found id: ""
	I0318 22:00:31.070860   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.070868   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:31.070874   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:31.070941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:31.109134   65622 cri.go:89] found id: ""
	I0318 22:00:31.109154   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.109162   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:31.109167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:31.109222   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:31.149757   65622 cri.go:89] found id: ""
	I0318 22:00:31.149784   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.149794   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:31.149802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:31.149862   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:31.190355   65622 cri.go:89] found id: ""
	I0318 22:00:31.190383   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.190393   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:31.190401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:31.190462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:31.229866   65622 cri.go:89] found id: ""
	I0318 22:00:31.229892   65622 logs.go:276] 0 containers: []
	W0318 22:00:31.229900   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:31.229909   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:31.229926   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:31.284984   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:31.285027   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:31.301026   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:31.301050   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:31.378120   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:31.378143   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:31.378158   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:31.459445   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:31.459475   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:34.003989   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:34.020959   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:34.021012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:34.060045   65622 cri.go:89] found id: ""
	I0318 22:00:34.060074   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.060086   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:34.060103   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:34.060151   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:34.101259   65622 cri.go:89] found id: ""
	I0318 22:00:34.101289   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.101299   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:34.101307   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:34.101372   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:34.141056   65622 cri.go:89] found id: ""
	I0318 22:00:34.141085   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.141096   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:34.141103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:34.141166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:34.179757   65622 cri.go:89] found id: ""
	I0318 22:00:34.179786   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.179797   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:34.179805   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:34.179872   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:34.221928   65622 cri.go:89] found id: ""
	I0318 22:00:34.221956   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.221989   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:34.221998   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:34.222063   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:34.260775   65622 cri.go:89] found id: ""
	I0318 22:00:34.260796   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.260804   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:34.260809   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:34.260866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:34.300910   65622 cri.go:89] found id: ""
	I0318 22:00:34.300936   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.300944   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:34.300950   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:34.300994   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:34.343581   65622 cri.go:89] found id: ""
	I0318 22:00:34.343611   65622 logs.go:276] 0 containers: []
	W0318 22:00:34.343619   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:34.343628   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:34.343640   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:34.399298   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:34.399330   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:34.414580   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:34.414619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:34.488013   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:34.488031   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:34.488043   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:34.580958   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:34.580994   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:31.821301   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:34.322210   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:33.299227   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.800314   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:32.693924   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:35.191865   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.129601   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:37.147758   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:37.147827   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:37.194763   65622 cri.go:89] found id: ""
	I0318 22:00:37.194784   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.194791   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:37.194797   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:37.194845   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:37.236298   65622 cri.go:89] found id: ""
	I0318 22:00:37.236326   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.236334   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:37.236353   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:37.236488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:37.274776   65622 cri.go:89] found id: ""
	I0318 22:00:37.274803   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.274813   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:37.274819   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:37.274883   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:37.319360   65622 cri.go:89] found id: ""
	I0318 22:00:37.319385   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.319395   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:37.319401   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:37.319463   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:37.365699   65622 cri.go:89] found id: ""
	I0318 22:00:37.365726   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.365734   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:37.365740   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:37.365824   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:37.404758   65622 cri.go:89] found id: ""
	I0318 22:00:37.404789   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.404799   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:37.404807   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:37.404874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:37.444567   65622 cri.go:89] found id: ""
	I0318 22:00:37.444591   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.444598   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:37.444603   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:37.444665   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:37.487729   65622 cri.go:89] found id: ""
	I0318 22:00:37.487752   65622 logs.go:276] 0 containers: []
	W0318 22:00:37.487760   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:37.487767   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:37.487786   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:37.566214   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:37.566235   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:37.566258   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:37.647847   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:37.647930   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:37.693027   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:37.693057   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:37.748111   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:37.748152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:36.324995   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.820800   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:38.298887   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.299570   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:37.193636   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:39.693273   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:40.277510   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:40.292312   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:40.292384   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:40.330335   65622 cri.go:89] found id: ""
	I0318 22:00:40.330368   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.330379   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:40.330386   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:40.330441   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:40.372534   65622 cri.go:89] found id: ""
	I0318 22:00:40.372560   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.372570   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:40.372577   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:40.372624   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:40.409430   65622 cri.go:89] found id: ""
	I0318 22:00:40.409460   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.409471   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:40.409478   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:40.409525   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:40.448350   65622 cri.go:89] found id: ""
	I0318 22:00:40.448372   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.448380   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:40.448385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:40.448431   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:40.490526   65622 cri.go:89] found id: ""
	I0318 22:00:40.490550   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.490559   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:40.490564   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:40.490613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:40.528926   65622 cri.go:89] found id: ""
	I0318 22:00:40.528953   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.528963   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:40.528971   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:40.529031   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:40.565779   65622 cri.go:89] found id: ""
	I0318 22:00:40.565808   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.565818   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:40.565826   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:40.565902   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:40.604152   65622 cri.go:89] found id: ""
	I0318 22:00:40.604181   65622 logs.go:276] 0 containers: []
	W0318 22:00:40.604192   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:40.604201   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:40.604215   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:40.689274   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:40.689310   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:40.736810   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:40.736844   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:40.796033   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:40.796061   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:40.811906   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:40.811929   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:40.889595   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:43.390663   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:43.407179   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:43.407254   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:43.448653   65622 cri.go:89] found id: ""
	I0318 22:00:43.448685   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.448696   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:43.448704   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:43.448772   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:43.489437   65622 cri.go:89] found id: ""
	I0318 22:00:43.489464   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.489472   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:43.489478   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:43.489533   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:43.564173   65622 cri.go:89] found id: ""
	I0318 22:00:43.564199   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.564209   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:43.564217   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:43.564278   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:43.606221   65622 cri.go:89] found id: ""
	I0318 22:00:43.606250   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.606260   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:43.606267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:43.606333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:43.646748   65622 cri.go:89] found id: ""
	I0318 22:00:43.646782   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.646794   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:43.646802   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:43.646864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:43.690465   65622 cri.go:89] found id: ""
	I0318 22:00:43.690496   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.690509   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:43.690519   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:43.690584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:43.730421   65622 cri.go:89] found id: ""
	I0318 22:00:43.730454   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.730464   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:43.730473   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:43.730538   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:43.769597   65622 cri.go:89] found id: ""
	I0318 22:00:43.769626   65622 logs.go:276] 0 containers: []
	W0318 22:00:43.769636   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:43.769646   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:43.769660   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:43.858316   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:43.858351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:43.907387   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:43.907417   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:43.963234   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:43.963271   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:43.979226   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:43.979253   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:44.065174   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:40.821224   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:43.319945   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.300484   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.300924   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.302264   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:42.192508   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:44.192743   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.566048   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:46.583140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:46.583212   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:46.624593   65622 cri.go:89] found id: ""
	I0318 22:00:46.624634   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.624643   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:46.624649   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:46.624700   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:46.664828   65622 cri.go:89] found id: ""
	I0318 22:00:46.664858   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.664868   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:46.664874   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:46.664944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:46.703632   65622 cri.go:89] found id: ""
	I0318 22:00:46.703658   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.703668   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:46.703675   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:46.703736   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:46.743379   65622 cri.go:89] found id: ""
	I0318 22:00:46.743409   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.743420   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:46.743427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:46.743487   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:46.784145   65622 cri.go:89] found id: ""
	I0318 22:00:46.784169   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.784178   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:46.784184   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:46.784233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:46.826469   65622 cri.go:89] found id: ""
	I0318 22:00:46.826491   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.826498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:46.826504   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:46.826559   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:46.868061   65622 cri.go:89] found id: ""
	I0318 22:00:46.868089   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.868102   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:46.868110   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:46.868167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:46.910584   65622 cri.go:89] found id: ""
	I0318 22:00:46.910612   65622 logs.go:276] 0 containers: []
	W0318 22:00:46.910622   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:46.910630   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:46.910642   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:46.954131   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:46.954157   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:47.008706   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:47.008737   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:47.024447   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:47.024474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:47.113208   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:47.113228   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:47.113242   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:49.699416   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:49.714870   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:49.714943   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:49.754386   65622 cri.go:89] found id: ""
	I0318 22:00:49.754415   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.754424   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:49.754430   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:49.754485   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:49.800223   65622 cri.go:89] found id: ""
	I0318 22:00:49.800248   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.800258   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:49.800268   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:49.800331   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:49.846747   65622 cri.go:89] found id: ""
	I0318 22:00:49.846775   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.846785   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:49.846793   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:49.846842   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:49.885554   65622 cri.go:89] found id: ""
	I0318 22:00:49.885581   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.885592   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:49.885600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:49.885652   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:49.925116   65622 cri.go:89] found id: ""
	I0318 22:00:49.925136   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.925144   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:49.925149   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:49.925193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:49.968467   65622 cri.go:89] found id: ""
	I0318 22:00:49.968491   65622 logs.go:276] 0 containers: []
	W0318 22:00:49.968498   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:49.968503   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:49.968575   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:45.321277   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:47.821205   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.822803   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:48.799135   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.801798   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:46.692554   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:49.193102   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:51.194134   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:50.016222   65622 cri.go:89] found id: ""
	I0318 22:00:50.016253   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.016261   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:50.016267   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:50.016320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:50.057053   65622 cri.go:89] found id: ""
	I0318 22:00:50.057074   65622 logs.go:276] 0 containers: []
	W0318 22:00:50.057082   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:50.057090   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:50.057101   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:50.137602   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:50.137631   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:50.213200   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:50.213227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:50.293533   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:50.293568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:50.312993   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:50.313019   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:50.399235   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:52.900027   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:52.914846   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:52.914918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:52.951864   65622 cri.go:89] found id: ""
	I0318 22:00:52.951887   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.951895   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:52.951900   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:52.951959   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:52.992339   65622 cri.go:89] found id: ""
	I0318 22:00:52.992374   65622 logs.go:276] 0 containers: []
	W0318 22:00:52.992386   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:52.992393   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:52.992448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:53.030499   65622 cri.go:89] found id: ""
	I0318 22:00:53.030527   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.030536   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:53.030543   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:53.030610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:53.069607   65622 cri.go:89] found id: ""
	I0318 22:00:53.069635   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.069645   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:53.069652   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:53.069706   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:53.110235   65622 cri.go:89] found id: ""
	I0318 22:00:53.110256   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.110263   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:53.110269   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:53.110320   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:53.152066   65622 cri.go:89] found id: ""
	I0318 22:00:53.152092   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.152100   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:53.152106   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:53.152166   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:53.195360   65622 cri.go:89] found id: ""
	I0318 22:00:53.195386   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.195395   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:53.195402   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:53.195448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:53.235134   65622 cri.go:89] found id: ""
	I0318 22:00:53.235159   65622 logs.go:276] 0 containers: []
	W0318 22:00:53.235166   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:53.235174   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:53.235186   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:53.286442   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:53.286473   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:53.342152   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:53.342183   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:53.358414   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:53.358438   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:53.430515   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:53.430534   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:53.430545   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:52.320478   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:54.321815   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.301031   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:55.799954   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:53.693639   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.193657   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:56.016088   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:56.034274   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:56.034350   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:56.095539   65622 cri.go:89] found id: ""
	I0318 22:00:56.095565   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.095581   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:56.095588   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:56.095645   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:56.149796   65622 cri.go:89] found id: ""
	I0318 22:00:56.149824   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.149834   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:56.149845   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:56.149907   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:56.205720   65622 cri.go:89] found id: ""
	I0318 22:00:56.205745   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.205760   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:56.205768   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:56.205828   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:56.250790   65622 cri.go:89] found id: ""
	I0318 22:00:56.250834   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.250862   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:56.250876   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:56.250944   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:56.290516   65622 cri.go:89] found id: ""
	I0318 22:00:56.290538   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.290545   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:56.290552   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:56.290609   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:56.335528   65622 cri.go:89] found id: ""
	I0318 22:00:56.335557   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.335570   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:56.335577   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:56.335638   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:56.380336   65622 cri.go:89] found id: ""
	I0318 22:00:56.380365   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.380376   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:56.380383   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:56.380448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:56.426326   65622 cri.go:89] found id: ""
	I0318 22:00:56.426351   65622 logs.go:276] 0 containers: []
	W0318 22:00:56.426359   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:56.426368   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:56.426385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:56.479966   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:56.480002   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.495557   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:56.495588   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:56.573474   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:56.573495   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:56.573506   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:56.657795   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:56.657826   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.206212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:00:59.221879   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:00:59.221936   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:00:59.265944   65622 cri.go:89] found id: ""
	I0318 22:00:59.265976   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.265986   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:00:59.265994   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:00:59.266052   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:00:59.305105   65622 cri.go:89] found id: ""
	I0318 22:00:59.305125   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.305132   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:00:59.305137   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:00:59.305182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:00:59.343573   65622 cri.go:89] found id: ""
	I0318 22:00:59.343600   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.343610   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:00:59.343618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:00:59.343674   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:00:59.385560   65622 cri.go:89] found id: ""
	I0318 22:00:59.385580   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.385587   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:00:59.385592   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:00:59.385639   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:00:59.422955   65622 cri.go:89] found id: ""
	I0318 22:00:59.422983   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.422994   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:00:59.423001   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:00:59.423062   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:00:59.460526   65622 cri.go:89] found id: ""
	I0318 22:00:59.460550   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.460561   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:00:59.460569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:00:59.460627   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:00:59.502703   65622 cri.go:89] found id: ""
	I0318 22:00:59.502732   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.502739   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:00:59.502753   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:00:59.502803   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:00:59.539097   65622 cri.go:89] found id: ""
	I0318 22:00:59.539120   65622 logs.go:276] 0 containers: []
	W0318 22:00:59.539128   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:00:59.539136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:00:59.539147   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:00:59.613607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:00:59.613628   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:00:59.613643   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:00:59.697432   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:00:59.697460   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:00:59.744643   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:00:59.744671   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:00:59.800670   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:00:59.800704   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:00:56.820977   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.822348   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:57.804405   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.299016   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:00:58.692166   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:00.692526   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.318430   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:02.334082   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:02.334158   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:02.383122   65622 cri.go:89] found id: ""
	I0318 22:01:02.383151   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.383161   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:02.383169   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:02.383229   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:02.426847   65622 cri.go:89] found id: ""
	I0318 22:01:02.426874   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.426884   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:02.426891   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:02.426955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:02.466377   65622 cri.go:89] found id: ""
	I0318 22:01:02.466403   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.466429   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:02.466437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:02.466501   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:02.506916   65622 cri.go:89] found id: ""
	I0318 22:01:02.506943   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.506953   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:02.506961   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:02.507021   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:02.549401   65622 cri.go:89] found id: ""
	I0318 22:01:02.549431   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.549439   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:02.549445   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:02.549494   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:02.589498   65622 cri.go:89] found id: ""
	I0318 22:01:02.589524   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.589535   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:02.589542   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:02.589603   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:02.626325   65622 cri.go:89] found id: ""
	I0318 22:01:02.626358   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.626369   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:02.626376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:02.626440   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:02.664922   65622 cri.go:89] found id: ""
	I0318 22:01:02.664949   65622 logs.go:276] 0 containers: []
	W0318 22:01:02.664958   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:02.664969   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:02.664986   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:02.722853   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:02.722883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:02.740280   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:02.740305   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:02.819215   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:02.819232   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:02.819244   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:02.902355   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:02.902395   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:01.319955   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:03.324127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.299297   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:04.299721   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:02.694116   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.193971   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:05.452180   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:05.465921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:05.465981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:05.507224   65622 cri.go:89] found id: ""
	I0318 22:01:05.507245   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.507255   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:05.507262   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:05.507329   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:05.544705   65622 cri.go:89] found id: ""
	I0318 22:01:05.544737   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.544748   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:05.544754   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:05.544814   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:05.583552   65622 cri.go:89] found id: ""
	I0318 22:01:05.583580   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.583592   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:05.583600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:05.583668   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:05.620969   65622 cri.go:89] found id: ""
	I0318 22:01:05.620995   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.621002   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:05.621009   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:05.621054   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:05.662789   65622 cri.go:89] found id: ""
	I0318 22:01:05.662816   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.662827   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:05.662835   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:05.662900   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:05.701457   65622 cri.go:89] found id: ""
	I0318 22:01:05.701496   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.701506   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:05.701513   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:05.701566   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:05.742050   65622 cri.go:89] found id: ""
	I0318 22:01:05.742078   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.742088   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:05.742095   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:05.742162   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:05.782620   65622 cri.go:89] found id: ""
	I0318 22:01:05.782645   65622 logs.go:276] 0 containers: []
	W0318 22:01:05.782653   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:05.782661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:05.782672   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:05.875779   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:05.875815   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.927687   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:05.927711   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:05.979235   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:05.979264   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:05.997508   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:05.997536   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:06.073619   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:08.574277   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:08.588248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:08.588312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:08.626950   65622 cri.go:89] found id: ""
	I0318 22:01:08.626976   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.626987   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:08.626993   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:08.627050   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:08.670404   65622 cri.go:89] found id: ""
	I0318 22:01:08.670429   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.670436   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:08.670442   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:08.670505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:08.706036   65622 cri.go:89] found id: ""
	I0318 22:01:08.706063   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.706072   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:08.706079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:08.706134   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:08.743251   65622 cri.go:89] found id: ""
	I0318 22:01:08.743279   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.743290   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:08.743298   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:08.743361   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:08.782303   65622 cri.go:89] found id: ""
	I0318 22:01:08.782329   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.782340   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:08.782347   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:08.782413   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:08.827060   65622 cri.go:89] found id: ""
	I0318 22:01:08.827086   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.827095   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:08.827104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:08.827157   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:08.867098   65622 cri.go:89] found id: ""
	I0318 22:01:08.867126   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.867137   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:08.867145   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:08.867192   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:08.906283   65622 cri.go:89] found id: ""
	I0318 22:01:08.906314   65622 logs.go:276] 0 containers: []
	W0318 22:01:08.906323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:08.906334   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:08.906349   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:08.959145   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:08.959171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:08.976307   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:08.976336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:09.049255   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:09.049285   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:09.049300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:09.139458   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:09.139493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:05.821257   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.320779   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:06.799599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:08.800534   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.301906   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:07.195710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:09.691770   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.687215   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:11.701855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:11.701926   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:11.740185   65622 cri.go:89] found id: ""
	I0318 22:01:11.740213   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.740224   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:11.740231   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:11.740293   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:11.782083   65622 cri.go:89] found id: ""
	I0318 22:01:11.782110   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.782119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:11.782126   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:11.782187   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:11.830887   65622 cri.go:89] found id: ""
	I0318 22:01:11.830910   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.830920   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:11.830928   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:11.830981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:11.868585   65622 cri.go:89] found id: ""
	I0318 22:01:11.868607   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.868613   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:11.868618   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:11.868673   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:11.912298   65622 cri.go:89] found id: ""
	I0318 22:01:11.912324   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.912336   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:11.912343   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:11.912396   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:11.957511   65622 cri.go:89] found id: ""
	I0318 22:01:11.957536   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.957546   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:11.957553   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:11.957610   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:11.998894   65622 cri.go:89] found id: ""
	I0318 22:01:11.998916   65622 logs.go:276] 0 containers: []
	W0318 22:01:11.998927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:11.998934   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:11.998984   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:12.039419   65622 cri.go:89] found id: ""
	I0318 22:01:12.039446   65622 logs.go:276] 0 containers: []
	W0318 22:01:12.039458   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:12.039468   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:12.039484   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:12.094721   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:12.094750   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:12.110328   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:12.110351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:12.183351   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:12.183371   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:12.183385   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:12.260772   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:12.260812   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:14.806518   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:14.821701   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:14.821760   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:14.864280   65622 cri.go:89] found id: ""
	I0318 22:01:14.864307   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.864316   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:14.864322   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:14.864380   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:14.913041   65622 cri.go:89] found id: ""
	I0318 22:01:14.913071   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.913083   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:14.913091   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:14.913155   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:14.951563   65622 cri.go:89] found id: ""
	I0318 22:01:14.951586   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.951594   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:14.951600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:14.951651   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:10.321379   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:12.321708   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.324578   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:13.303344   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:15.799107   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:11.692795   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.192711   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:16.192974   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:14.993070   65622 cri.go:89] found id: ""
	I0318 22:01:14.993103   65622 logs.go:276] 0 containers: []
	W0318 22:01:14.993114   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:14.993122   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:14.993182   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:15.033552   65622 cri.go:89] found id: ""
	I0318 22:01:15.033580   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.033591   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:15.033600   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:15.033660   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:15.075982   65622 cri.go:89] found id: ""
	I0318 22:01:15.076009   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.076020   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:15.076031   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:15.076090   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:15.118757   65622 cri.go:89] found id: ""
	I0318 22:01:15.118784   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.118795   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:15.118801   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:15.118844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:15.160333   65622 cri.go:89] found id: ""
	I0318 22:01:15.160355   65622 logs.go:276] 0 containers: []
	W0318 22:01:15.160366   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:15.160374   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:15.160387   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:15.239607   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:15.239635   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:15.239653   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:15.324254   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:15.324285   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:15.370722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:15.370754   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:15.423268   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:15.423297   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:17.940107   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:17.954692   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:17.954749   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:18.001810   65622 cri.go:89] found id: ""
	I0318 22:01:18.001831   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.001838   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:18.001844   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:18.001903   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:18.042871   65622 cri.go:89] found id: ""
	I0318 22:01:18.042897   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.042909   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:18.042916   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:18.042975   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:18.083933   65622 cri.go:89] found id: ""
	I0318 22:01:18.083956   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.083964   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:18.083969   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:18.084019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:18.125590   65622 cri.go:89] found id: ""
	I0318 22:01:18.125617   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.125628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:18.125636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:18.125697   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:18.166696   65622 cri.go:89] found id: ""
	I0318 22:01:18.166727   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.166737   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:18.166745   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:18.166806   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:18.211273   65622 cri.go:89] found id: ""
	I0318 22:01:18.211297   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.211308   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:18.211315   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:18.211382   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:18.251821   65622 cri.go:89] found id: ""
	I0318 22:01:18.251844   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.251851   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:18.251860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:18.251918   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:18.290507   65622 cri.go:89] found id: ""
	I0318 22:01:18.290531   65622 logs.go:276] 0 containers: []
	W0318 22:01:18.290541   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:18.290552   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:18.290568   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:18.349013   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:18.349041   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:18.366082   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:18.366113   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:18.441742   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:18.441766   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:18.441780   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:18.535299   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:18.535335   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:16.820809   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.820856   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:17.800874   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.301479   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:18.691838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:20.692582   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:21.077652   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:21.092980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:21.093039   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:21.132742   65622 cri.go:89] found id: ""
	I0318 22:01:21.132762   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.132770   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:21.132776   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:21.132833   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:21.170814   65622 cri.go:89] found id: ""
	I0318 22:01:21.170836   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.170844   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:21.170849   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:21.170911   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:21.212812   65622 cri.go:89] found id: ""
	I0318 22:01:21.212845   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.212853   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:21.212860   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:21.212924   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:21.254010   65622 cri.go:89] found id: ""
	I0318 22:01:21.254036   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.254044   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:21.254052   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:21.254095   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:21.292032   65622 cri.go:89] found id: ""
	I0318 22:01:21.292061   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.292073   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:21.292083   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:21.292152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:21.336946   65622 cri.go:89] found id: ""
	I0318 22:01:21.336975   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.336985   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:21.336992   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:21.337043   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:21.380295   65622 cri.go:89] found id: ""
	I0318 22:01:21.380319   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.380328   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:21.380336   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:21.380399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:21.417674   65622 cri.go:89] found id: ""
	I0318 22:01:21.417701   65622 logs.go:276] 0 containers: []
	W0318 22:01:21.417708   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:21.417717   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:21.417728   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:21.470782   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:21.470808   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:21.486015   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:21.486036   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:21.560654   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:21.560682   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:21.560699   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:21.644108   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:21.644146   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:24.190787   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:24.205695   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:24.205761   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:24.262577   65622 cri.go:89] found id: ""
	I0318 22:01:24.262602   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.262610   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:24.262615   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:24.262680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:24.304807   65622 cri.go:89] found id: ""
	I0318 22:01:24.304835   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.304845   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:24.304853   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:24.304933   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:24.345595   65622 cri.go:89] found id: ""
	I0318 22:01:24.345670   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.345688   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:24.345696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:24.345762   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:24.388471   65622 cri.go:89] found id: ""
	I0318 22:01:24.388498   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.388508   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:24.388515   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:24.388573   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:24.429610   65622 cri.go:89] found id: ""
	I0318 22:01:24.429641   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.429653   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:24.429663   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:24.429728   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:24.469661   65622 cri.go:89] found id: ""
	I0318 22:01:24.469683   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.469690   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:24.469696   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:24.469740   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:24.508086   65622 cri.go:89] found id: ""
	I0318 22:01:24.508115   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.508126   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:24.508133   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:24.508195   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:24.548963   65622 cri.go:89] found id: ""
	I0318 22:01:24.548988   65622 logs.go:276] 0 containers: []
	W0318 22:01:24.548998   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:24.549009   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:24.549028   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:24.603983   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:24.604012   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:24.620185   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:24.620207   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:24.699677   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:24.699699   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:24.699713   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:24.778830   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:24.778884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:20.821237   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.320180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:22.302559   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:24.800442   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:23.193491   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:25.692671   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.334749   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:27.349132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:27.349188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:27.394163   65622 cri.go:89] found id: ""
	I0318 22:01:27.394190   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.394197   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:27.394203   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:27.394259   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:27.435176   65622 cri.go:89] found id: ""
	I0318 22:01:27.435198   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.435207   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:27.435215   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:27.435273   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:27.475388   65622 cri.go:89] found id: ""
	I0318 22:01:27.475414   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.475422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:27.475427   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:27.475474   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:27.516225   65622 cri.go:89] found id: ""
	I0318 22:01:27.516247   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.516255   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:27.516265   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:27.516321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:27.554423   65622 cri.go:89] found id: ""
	I0318 22:01:27.554451   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.554459   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:27.554465   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:27.554518   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:27.592315   65622 cri.go:89] found id: ""
	I0318 22:01:27.592342   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.592352   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:27.592360   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:27.592418   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:27.634820   65622 cri.go:89] found id: ""
	I0318 22:01:27.634842   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.634849   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:27.634855   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:27.634912   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:27.673677   65622 cri.go:89] found id: ""
	I0318 22:01:27.673703   65622 logs.go:276] 0 containers: []
	W0318 22:01:27.673713   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:27.673724   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:27.673738   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:27.728342   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:27.728370   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:27.745465   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:27.745493   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:27.817800   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:27.817822   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:27.817836   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:27.905115   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:27.905152   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:25.322575   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.323097   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.821127   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.302001   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:29.799369   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:27.693253   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.192347   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:30.450454   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:30.464916   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:30.464969   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:30.504399   65622 cri.go:89] found id: ""
	I0318 22:01:30.504432   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.504443   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:30.504452   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:30.504505   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:30.543216   65622 cri.go:89] found id: ""
	I0318 22:01:30.543240   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.543248   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:30.543254   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:30.543310   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:30.581415   65622 cri.go:89] found id: ""
	I0318 22:01:30.581440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.581451   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:30.581459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:30.581515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:30.620419   65622 cri.go:89] found id: ""
	I0318 22:01:30.620440   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.620447   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:30.620453   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:30.620495   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:30.671859   65622 cri.go:89] found id: ""
	I0318 22:01:30.671886   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.671893   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:30.671899   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:30.671955   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:30.732705   65622 cri.go:89] found id: ""
	I0318 22:01:30.732732   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.732742   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:30.732750   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:30.732811   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:30.793811   65622 cri.go:89] found id: ""
	I0318 22:01:30.793839   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.793850   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:30.793856   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:30.793915   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:30.851516   65622 cri.go:89] found id: ""
	I0318 22:01:30.851539   65622 logs.go:276] 0 containers: []
	W0318 22:01:30.851546   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:30.851555   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:30.851566   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:30.907463   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:30.907496   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:30.924254   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:30.924286   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:31.002155   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:31.002177   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:31.002193   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:31.085486   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:31.085515   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:33.627379   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:33.641314   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:33.641378   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:33.683093   65622 cri.go:89] found id: ""
	I0318 22:01:33.683119   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.683129   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:33.683136   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:33.683193   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:33.724006   65622 cri.go:89] found id: ""
	I0318 22:01:33.724034   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.724042   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:33.724048   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:33.724091   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:33.761196   65622 cri.go:89] found id: ""
	I0318 22:01:33.761224   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.761240   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:33.761248   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:33.761306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:33.800636   65622 cri.go:89] found id: ""
	I0318 22:01:33.800661   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.800670   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:33.800676   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:33.800733   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:33.839423   65622 cri.go:89] found id: ""
	I0318 22:01:33.839450   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.839458   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:33.839464   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:33.839508   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:33.883076   65622 cri.go:89] found id: ""
	I0318 22:01:33.883102   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.883112   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:33.883118   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:33.883174   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:33.921886   65622 cri.go:89] found id: ""
	I0318 22:01:33.921909   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.921920   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:33.921926   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:33.921981   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:33.964632   65622 cri.go:89] found id: ""
	I0318 22:01:33.964659   65622 logs.go:276] 0 containers: []
	W0318 22:01:33.964670   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:33.964680   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:33.964700   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:34.043708   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:34.043731   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:34.043743   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:34.129150   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:34.129178   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:34.176067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:34.176089   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:34.231399   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:34.231433   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:32.324221   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.821547   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.301599   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.798017   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:32.692835   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:34.693519   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.747929   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:36.761803   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:36.761859   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:36.806407   65622 cri.go:89] found id: ""
	I0318 22:01:36.806434   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.806441   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:36.806447   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:36.806498   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:36.849046   65622 cri.go:89] found id: ""
	I0318 22:01:36.849073   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.849084   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:36.849092   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:36.849152   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:36.889880   65622 cri.go:89] found id: ""
	I0318 22:01:36.889910   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.889922   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:36.889929   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:36.889995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:36.936012   65622 cri.go:89] found id: ""
	I0318 22:01:36.936033   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.936041   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:36.936046   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:36.936094   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:36.977538   65622 cri.go:89] found id: ""
	I0318 22:01:36.977568   65622 logs.go:276] 0 containers: []
	W0318 22:01:36.977578   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:36.977587   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:36.977647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:37.014843   65622 cri.go:89] found id: ""
	I0318 22:01:37.014870   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.014881   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:37.014888   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:37.014956   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:37.055058   65622 cri.go:89] found id: ""
	I0318 22:01:37.055086   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.055097   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:37.055104   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:37.055167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:37.100605   65622 cri.go:89] found id: ""
	I0318 22:01:37.100633   65622 logs.go:276] 0 containers: []
	W0318 22:01:37.100642   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:37.100652   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:37.100666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:37.181840   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:37.181874   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:37.232689   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:37.232721   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:37.287264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:37.287294   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:37.305614   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:37.305638   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:37.389196   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:39.889461   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:39.904409   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:39.904472   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:39.944610   65622 cri.go:89] found id: ""
	I0318 22:01:39.944633   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.944641   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:39.944647   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:39.944701   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:37.323580   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.325038   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.798108   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:38.799072   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:40.799797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:36.694495   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.192489   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:41.193100   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:39.984337   65622 cri.go:89] found id: ""
	I0318 22:01:39.984360   65622 logs.go:276] 0 containers: []
	W0318 22:01:39.984367   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:39.984373   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:39.984427   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:40.026238   65622 cri.go:89] found id: ""
	I0318 22:01:40.026264   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.026276   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:40.026282   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:40.026338   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:40.075591   65622 cri.go:89] found id: ""
	I0318 22:01:40.075619   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.075628   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:40.075636   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:40.075686   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:40.126829   65622 cri.go:89] found id: ""
	I0318 22:01:40.126859   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.126871   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:40.126880   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:40.126941   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:40.167695   65622 cri.go:89] found id: ""
	I0318 22:01:40.167724   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.167735   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:40.167744   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:40.167802   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:40.205545   65622 cri.go:89] found id: ""
	I0318 22:01:40.205570   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.205582   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:40.205589   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:40.205636   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:40.245521   65622 cri.go:89] found id: ""
	I0318 22:01:40.245547   65622 logs.go:276] 0 containers: []
	W0318 22:01:40.245556   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:40.245567   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:40.245583   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:40.306315   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:40.306348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:40.324996   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:40.325021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:40.406484   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:40.406513   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:40.406526   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:40.492294   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:40.492323   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:43.034812   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:43.049661   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:43.049727   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:43.089419   65622 cri.go:89] found id: ""
	I0318 22:01:43.089444   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.089453   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:43.089461   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:43.089515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:43.130350   65622 cri.go:89] found id: ""
	I0318 22:01:43.130384   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.130394   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:43.130401   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:43.130462   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:43.171480   65622 cri.go:89] found id: ""
	I0318 22:01:43.171506   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.171515   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:43.171522   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:43.171567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:43.210215   65622 cri.go:89] found id: ""
	I0318 22:01:43.210240   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.210249   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:43.210258   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:43.210312   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:43.247024   65622 cri.go:89] found id: ""
	I0318 22:01:43.247049   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.247056   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:43.247063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:43.247113   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:43.283614   65622 cri.go:89] found id: ""
	I0318 22:01:43.283640   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.283651   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:43.283659   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:43.283716   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:43.327442   65622 cri.go:89] found id: ""
	I0318 22:01:43.327468   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.327478   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:43.327486   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:43.327544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:43.365732   65622 cri.go:89] found id: ""
	I0318 22:01:43.365760   65622 logs.go:276] 0 containers: []
	W0318 22:01:43.365769   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:43.365780   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:43.365793   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:43.425359   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:43.425396   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:43.442136   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:43.442161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:43.519737   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:43.519762   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:43.519777   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:43.602933   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:43.602972   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:41.821043   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:44.322040   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:42.802267   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.301098   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:43.692766   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:45.693595   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:46.146009   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:46.161266   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:46.161333   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:46.203056   65622 cri.go:89] found id: ""
	I0318 22:01:46.203082   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.203094   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:46.203101   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:46.203159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:46.245954   65622 cri.go:89] found id: ""
	I0318 22:01:46.245981   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.245991   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:46.245998   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:46.246069   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:46.282395   65622 cri.go:89] found id: ""
	I0318 22:01:46.282420   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.282431   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:46.282438   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:46.282497   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:46.322036   65622 cri.go:89] found id: ""
	I0318 22:01:46.322061   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.322072   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:46.322079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:46.322136   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:46.360951   65622 cri.go:89] found id: ""
	I0318 22:01:46.360973   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.360981   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:46.360987   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:46.361049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:46.399334   65622 cri.go:89] found id: ""
	I0318 22:01:46.399364   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.399382   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:46.399391   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:46.399450   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:46.443891   65622 cri.go:89] found id: ""
	I0318 22:01:46.443922   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.443933   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:46.443940   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:46.443990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:46.483047   65622 cri.go:89] found id: ""
	I0318 22:01:46.483088   65622 logs.go:276] 0 containers: []
	W0318 22:01:46.483099   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:46.483110   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:46.483124   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:46.542995   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:46.543026   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.559582   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:46.559605   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:46.637046   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:46.637065   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:46.637076   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:46.719628   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:46.719657   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.263990   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:49.278403   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:49.278469   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:49.322980   65622 cri.go:89] found id: ""
	I0318 22:01:49.323003   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.323014   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:49.323021   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:49.323077   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:49.360100   65622 cri.go:89] found id: ""
	I0318 22:01:49.360120   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.360127   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:49.360132   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:49.360180   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:49.402044   65622 cri.go:89] found id: ""
	I0318 22:01:49.402084   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.402095   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:49.402103   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:49.402164   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:49.442337   65622 cri.go:89] found id: ""
	I0318 22:01:49.442367   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.442391   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:49.442397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:49.442448   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:49.479079   65622 cri.go:89] found id: ""
	I0318 22:01:49.479111   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.479124   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:49.479132   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:49.479197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:49.526057   65622 cri.go:89] found id: ""
	I0318 22:01:49.526080   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.526090   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:49.526098   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:49.526159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:49.566720   65622 cri.go:89] found id: ""
	I0318 22:01:49.566747   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.566759   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:49.566767   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:49.566821   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:49.603120   65622 cri.go:89] found id: ""
	I0318 22:01:49.603142   65622 logs.go:276] 0 containers: []
	W0318 22:01:49.603152   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:49.603163   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:49.603180   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:49.677879   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:49.677904   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:49.677921   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:49.762904   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:49.762933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:49.809332   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:49.809358   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:49.861568   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:49.861599   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:46.322167   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.322495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:47.800006   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.298196   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:48.193259   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:50.195154   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.377996   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:52.396078   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:52.396159   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:52.435945   65622 cri.go:89] found id: ""
	I0318 22:01:52.435972   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.435980   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:52.435985   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:52.436034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:52.478723   65622 cri.go:89] found id: ""
	I0318 22:01:52.478754   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.478765   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:52.478772   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:52.478835   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:52.522240   65622 cri.go:89] found id: ""
	I0318 22:01:52.522267   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.522275   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:52.522281   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:52.522336   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:52.560168   65622 cri.go:89] found id: ""
	I0318 22:01:52.560195   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.560202   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:52.560208   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:52.560253   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:52.599730   65622 cri.go:89] found id: ""
	I0318 22:01:52.599752   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.599759   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:52.599765   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:52.599810   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:52.640357   65622 cri.go:89] found id: ""
	I0318 22:01:52.640386   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.640400   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:52.640407   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:52.640465   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:52.680925   65622 cri.go:89] found id: ""
	I0318 22:01:52.680954   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.680966   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:52.680972   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:52.681041   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:52.719537   65622 cri.go:89] found id: ""
	I0318 22:01:52.719561   65622 logs.go:276] 0 containers: []
	W0318 22:01:52.719570   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:52.719580   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:52.719597   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:52.773264   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:52.773292   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:52.788278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:52.788302   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:52.866674   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:52.866700   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:52.866714   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:52.952228   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:52.952263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:50.821598   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:53.321546   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.302659   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:54.799292   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:52.692794   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.192968   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:55.499710   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:55.514986   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:55.515049   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:55.561168   65622 cri.go:89] found id: ""
	I0318 22:01:55.561191   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.561198   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:55.561204   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:55.561252   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:55.606505   65622 cri.go:89] found id: ""
	I0318 22:01:55.606534   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.606545   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:55.606552   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:55.606613   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:55.648625   65622 cri.go:89] found id: ""
	I0318 22:01:55.648655   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.648665   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:55.648672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:55.648731   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:55.690878   65622 cri.go:89] found id: ""
	I0318 22:01:55.690903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.690914   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:55.690923   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:55.690987   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:55.729873   65622 cri.go:89] found id: ""
	I0318 22:01:55.729903   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.729914   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:55.729921   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:55.729982   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:55.767926   65622 cri.go:89] found id: ""
	I0318 22:01:55.767951   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.767959   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:55.767965   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:55.768025   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:55.809907   65622 cri.go:89] found id: ""
	I0318 22:01:55.809934   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.809942   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:55.809947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:55.810009   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:55.853992   65622 cri.go:89] found id: ""
	I0318 22:01:55.854023   65622 logs.go:276] 0 containers: []
	W0318 22:01:55.854032   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:55.854041   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:55.854060   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:55.932160   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:55.932185   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:55.932200   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:56.019976   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:56.020010   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:56.063901   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:56.063935   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:56.119282   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:56.119314   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:58.636555   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:01:58.651774   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:01:58.651851   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:01:58.697005   65622 cri.go:89] found id: ""
	I0318 22:01:58.697037   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.697047   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:01:58.697055   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:01:58.697128   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:01:58.742190   65622 cri.go:89] found id: ""
	I0318 22:01:58.742218   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.742229   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:01:58.742236   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:01:58.742297   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:01:58.779335   65622 cri.go:89] found id: ""
	I0318 22:01:58.779359   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.779378   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:01:58.779385   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:01:58.779445   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:01:58.818936   65622 cri.go:89] found id: ""
	I0318 22:01:58.818964   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.818972   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:01:58.818980   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:01:58.819034   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:01:58.856473   65622 cri.go:89] found id: ""
	I0318 22:01:58.856500   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.856511   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:01:58.856518   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:01:58.856579   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:01:58.897381   65622 cri.go:89] found id: ""
	I0318 22:01:58.897412   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.897423   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:01:58.897432   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:01:58.897503   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:01:58.938179   65622 cri.go:89] found id: ""
	I0318 22:01:58.938209   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.938221   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:01:58.938228   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:01:58.938295   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:01:58.981021   65622 cri.go:89] found id: ""
	I0318 22:01:58.981049   65622 logs.go:276] 0 containers: []
	W0318 22:01:58.981059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:01:58.981067   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:01:58.981081   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:01:59.054749   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:01:59.054779   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:01:59.070160   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:01:59.070188   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:01:59.150369   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:01:59.150385   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:01:59.150398   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:01:59.238341   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:01:59.238381   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:01:55.821471   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.822495   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.299408   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.299964   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:57.193704   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:01:59.194959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.790139   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:01.807948   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:01.808006   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:01.855198   65622 cri.go:89] found id: ""
	I0318 22:02:01.855224   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.855231   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:01.855238   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:01.855291   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:01.895292   65622 cri.go:89] found id: ""
	I0318 22:02:01.895313   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.895321   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:01.895326   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:01.895381   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:01.934102   65622 cri.go:89] found id: ""
	I0318 22:02:01.934127   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.934139   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:01.934146   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:01.934196   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:01.975676   65622 cri.go:89] found id: ""
	I0318 22:02:01.975704   65622 logs.go:276] 0 containers: []
	W0318 22:02:01.975715   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:01.975723   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:01.975789   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:02.015656   65622 cri.go:89] found id: ""
	I0318 22:02:02.015691   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.015701   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:02.015710   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:02.015771   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:02.058634   65622 cri.go:89] found id: ""
	I0318 22:02:02.058658   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.058666   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:02.058672   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:02.058719   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:02.096655   65622 cri.go:89] found id: ""
	I0318 22:02:02.096681   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.096692   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:02.096700   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:02.096767   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:02.137485   65622 cri.go:89] found id: ""
	I0318 22:02:02.137510   65622 logs.go:276] 0 containers: []
	W0318 22:02:02.137519   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:02.137527   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:02.137543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:02.221269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:02.221304   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:02.265816   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:02.265846   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:02.321554   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:02.321592   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:02.338503   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:02.338530   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:02.431779   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:04.932229   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:04.948859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:04.948931   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:00.321126   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:02.321899   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.798818   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:03.800605   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:05.801459   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:01.693520   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.192449   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:06.192843   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:04.995353   65622 cri.go:89] found id: ""
	I0318 22:02:04.995379   65622 logs.go:276] 0 containers: []
	W0318 22:02:04.995386   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:04.995392   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:04.995438   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:05.034886   65622 cri.go:89] found id: ""
	I0318 22:02:05.034911   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.034922   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:05.034929   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:05.034995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:05.076635   65622 cri.go:89] found id: ""
	I0318 22:02:05.076663   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.076673   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:05.076681   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:05.076742   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:05.119481   65622 cri.go:89] found id: ""
	I0318 22:02:05.119506   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.119514   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:05.119520   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:05.119571   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:05.162331   65622 cri.go:89] found id: ""
	I0318 22:02:05.162354   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.162369   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:05.162376   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:05.162428   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:05.206038   65622 cri.go:89] found id: ""
	I0318 22:02:05.206066   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.206076   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:05.206084   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:05.206142   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:05.251273   65622 cri.go:89] found id: ""
	I0318 22:02:05.251298   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.251309   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:05.251316   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:05.251375   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:05.292855   65622 cri.go:89] found id: ""
	I0318 22:02:05.292882   65622 logs.go:276] 0 containers: []
	W0318 22:02:05.292892   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:05.292917   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:05.292933   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:05.310330   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:05.310354   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:05.384915   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:05.384938   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:05.384957   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:05.472147   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:05.472182   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:05.544328   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:05.544351   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:08.101241   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:08.117397   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:08.117515   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:08.160011   65622 cri.go:89] found id: ""
	I0318 22:02:08.160035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.160043   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:08.160048   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:08.160100   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:08.202826   65622 cri.go:89] found id: ""
	I0318 22:02:08.202849   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.202860   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:08.202867   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:08.202935   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:08.241743   65622 cri.go:89] found id: ""
	I0318 22:02:08.241780   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.241792   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:08.241800   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:08.241864   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:08.280725   65622 cri.go:89] found id: ""
	I0318 22:02:08.280758   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.280769   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:08.280777   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:08.280840   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:08.324015   65622 cri.go:89] found id: ""
	I0318 22:02:08.324035   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.324041   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:08.324047   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:08.324104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:08.367332   65622 cri.go:89] found id: ""
	I0318 22:02:08.367356   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.367368   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:08.367375   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:08.367433   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:08.407042   65622 cri.go:89] found id: ""
	I0318 22:02:08.407066   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.407073   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:08.407079   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:08.407126   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:08.443800   65622 cri.go:89] found id: ""
	I0318 22:02:08.443820   65622 logs.go:276] 0 containers: []
	W0318 22:02:08.443827   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:08.443836   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:08.443850   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:08.459139   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:08.459172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:08.534893   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:08.534918   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:08.534934   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:08.627283   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:08.627322   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:08.672928   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:08.672967   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:06.821775   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:09.322004   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.299572   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:10.799620   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:08.693106   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.192341   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:11.230296   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:11.248814   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:11.248891   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:11.297030   65622 cri.go:89] found id: ""
	I0318 22:02:11.297056   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.297065   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:11.297072   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:11.297133   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:11.348811   65622 cri.go:89] found id: ""
	I0318 22:02:11.348837   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.348847   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:11.348854   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:11.348939   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:11.412137   65622 cri.go:89] found id: ""
	I0318 22:02:11.412161   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.412168   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:11.412174   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:11.412231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:11.452098   65622 cri.go:89] found id: ""
	I0318 22:02:11.452128   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.452139   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:11.452147   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:11.452207   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:11.492477   65622 cri.go:89] found id: ""
	I0318 22:02:11.492509   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.492519   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:11.492527   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:11.492588   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:11.532208   65622 cri.go:89] found id: ""
	I0318 22:02:11.532234   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.532244   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:11.532252   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:11.532306   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:11.570515   65622 cri.go:89] found id: ""
	I0318 22:02:11.570545   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.570556   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:11.570563   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:11.570633   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:11.613031   65622 cri.go:89] found id: ""
	I0318 22:02:11.613052   65622 logs.go:276] 0 containers: []
	W0318 22:02:11.613069   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:11.613079   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:11.613098   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:11.672019   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:11.672048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:11.687528   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:11.687550   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:11.761149   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:11.761172   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:11.761187   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.847273   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:11.847311   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:14.393016   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:14.409657   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:14.409732   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:14.451669   65622 cri.go:89] found id: ""
	I0318 22:02:14.451697   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.451711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:14.451717   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:14.451763   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:14.503383   65622 cri.go:89] found id: ""
	I0318 22:02:14.503408   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.503419   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:14.503427   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:14.503491   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:14.543027   65622 cri.go:89] found id: ""
	I0318 22:02:14.543048   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.543056   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:14.543061   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:14.543104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:14.583615   65622 cri.go:89] found id: ""
	I0318 22:02:14.583639   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.583649   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:14.583656   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:14.583713   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:14.621176   65622 cri.go:89] found id: ""
	I0318 22:02:14.621206   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.621217   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:14.621225   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:14.621283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:14.659419   65622 cri.go:89] found id: ""
	I0318 22:02:14.659440   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.659448   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:14.659454   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:14.659499   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:14.699307   65622 cri.go:89] found id: ""
	I0318 22:02:14.699337   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.699347   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:14.699354   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:14.699416   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:14.737379   65622 cri.go:89] found id: ""
	I0318 22:02:14.737406   65622 logs.go:276] 0 containers: []
	W0318 22:02:14.737414   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:14.737421   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:14.737432   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:14.793912   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:14.793939   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:14.809577   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:14.809604   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:14.898740   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:14.898767   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:14.898782   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:11.821139   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.821610   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.299590   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.303956   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:13.692089   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:15.693750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:14.981009   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:14.981038   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.526944   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:17.543437   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:17.543488   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:17.585722   65622 cri.go:89] found id: ""
	I0318 22:02:17.585747   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.585757   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:17.585765   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:17.585820   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:17.623603   65622 cri.go:89] found id: ""
	I0318 22:02:17.623632   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.623642   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:17.623650   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:17.623712   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:17.666086   65622 cri.go:89] found id: ""
	I0318 22:02:17.666113   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.666122   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:17.666130   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:17.666188   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:17.714403   65622 cri.go:89] found id: ""
	I0318 22:02:17.714430   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.714440   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:17.714448   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:17.714527   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:17.753174   65622 cri.go:89] found id: ""
	I0318 22:02:17.753199   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.753206   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:17.753212   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:17.753270   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:17.794962   65622 cri.go:89] found id: ""
	I0318 22:02:17.794992   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.795002   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:17.795010   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:17.795068   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:17.835446   65622 cri.go:89] found id: ""
	I0318 22:02:17.835469   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.835477   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:17.835482   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:17.835529   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:17.872243   65622 cri.go:89] found id: ""
	I0318 22:02:17.872271   65622 logs.go:276] 0 containers: []
	W0318 22:02:17.872279   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:17.872287   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:17.872299   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:17.915485   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:17.915520   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:17.969133   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:17.969161   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:17.984278   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:17.984300   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:18.055851   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:18.055871   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:18.055884   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:16.320827   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:18.321654   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.800563   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.300888   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:17.694101   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.191376   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:20.646312   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:20.660153   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:20.660220   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:20.704341   65622 cri.go:89] found id: ""
	I0318 22:02:20.704365   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.704376   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:20.704388   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:20.704443   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:20.747673   65622 cri.go:89] found id: ""
	I0318 22:02:20.747694   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.747702   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:20.747708   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:20.747753   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:20.787547   65622 cri.go:89] found id: ""
	I0318 22:02:20.787574   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.787585   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:20.787593   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:20.787694   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:20.830416   65622 cri.go:89] found id: ""
	I0318 22:02:20.830450   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.830461   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:20.830469   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:20.830531   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:20.871867   65622 cri.go:89] found id: ""
	I0318 22:02:20.871899   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.871912   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:20.871919   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:20.871980   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:20.915574   65622 cri.go:89] found id: ""
	I0318 22:02:20.915602   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.915614   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:20.915622   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:20.915680   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:20.956277   65622 cri.go:89] found id: ""
	I0318 22:02:20.956313   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.956322   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:20.956329   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:20.956399   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:20.997686   65622 cri.go:89] found id: ""
	I0318 22:02:20.997715   65622 logs.go:276] 0 containers: []
	W0318 22:02:20.997723   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:20.997732   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:20.997745   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:21.015019   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:21.015048   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:21.092090   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:21.092117   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:21.092133   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:21.169118   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:21.169149   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:21.215267   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:21.215298   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:23.769587   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:23.784063   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:23.784119   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:23.825704   65622 cri.go:89] found id: ""
	I0318 22:02:23.825726   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.825733   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:23.825740   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:23.825795   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:23.871536   65622 cri.go:89] found id: ""
	I0318 22:02:23.871561   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.871579   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:23.871586   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:23.871647   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:23.911388   65622 cri.go:89] found id: ""
	I0318 22:02:23.911415   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.911422   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:23.911428   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:23.911478   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:23.956649   65622 cri.go:89] found id: ""
	I0318 22:02:23.956671   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.956679   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:23.956687   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:23.956755   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:23.999368   65622 cri.go:89] found id: ""
	I0318 22:02:23.999395   65622 logs.go:276] 0 containers: []
	W0318 22:02:23.999405   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:23.999413   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:23.999471   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:24.039075   65622 cri.go:89] found id: ""
	I0318 22:02:24.039105   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.039118   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:24.039124   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:24.039186   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:24.079473   65622 cri.go:89] found id: ""
	I0318 22:02:24.079502   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.079513   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:24.079521   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:24.079587   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:24.118019   65622 cri.go:89] found id: ""
	I0318 22:02:24.118048   65622 logs.go:276] 0 containers: []
	W0318 22:02:24.118059   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:24.118069   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:24.118085   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:24.174530   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:24.174562   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:24.191685   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:24.191724   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:24.282133   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:24.282158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:24.282172   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:24.366181   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:24.366228   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:20.322586   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.820488   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.820555   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.798797   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.799501   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:22.192760   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:24.193279   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.912982   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:26.927364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:26.927425   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:26.968236   65622 cri.go:89] found id: ""
	I0318 22:02:26.968259   65622 logs.go:276] 0 containers: []
	W0318 22:02:26.968267   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:26.968272   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:26.968339   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:27.008226   65622 cri.go:89] found id: ""
	I0318 22:02:27.008251   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.008261   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:27.008267   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:27.008321   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:27.047742   65622 cri.go:89] found id: ""
	I0318 22:02:27.047767   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.047777   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:27.047784   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:27.047844   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:27.090692   65622 cri.go:89] found id: ""
	I0318 22:02:27.090722   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.090734   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:27.090741   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:27.090797   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:27.126596   65622 cri.go:89] found id: ""
	I0318 22:02:27.126621   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.126629   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:27.126635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:27.126684   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:27.162492   65622 cri.go:89] found id: ""
	I0318 22:02:27.162521   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.162530   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:27.162535   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:27.162583   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:27.203480   65622 cri.go:89] found id: ""
	I0318 22:02:27.203504   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.203517   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:27.203524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:27.203598   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:27.247140   65622 cri.go:89] found id: ""
	I0318 22:02:27.247162   65622 logs.go:276] 0 containers: []
	W0318 22:02:27.247172   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:27.247182   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:27.247198   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:27.328507   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:27.328529   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:27.328543   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:27.409269   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:27.409303   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:27.459615   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:27.459647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:27.512980   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:27.513014   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:26.821222   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.321682   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:27.302631   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.799175   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:26.693239   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:29.192207   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.193072   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:30.030021   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:30.045235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:30.045288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:30.092857   65622 cri.go:89] found id: ""
	I0318 22:02:30.092896   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.092919   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:30.092927   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:30.092977   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:30.133145   65622 cri.go:89] found id: ""
	I0318 22:02:30.133169   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.133176   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:30.133181   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:30.133244   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:30.179214   65622 cri.go:89] found id: ""
	I0318 22:02:30.179242   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.179252   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:30.179259   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:30.179323   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:30.221500   65622 cri.go:89] found id: ""
	I0318 22:02:30.221524   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.221533   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:30.221541   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:30.221585   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:30.262483   65622 cri.go:89] found id: ""
	I0318 22:02:30.262505   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.262516   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:30.262524   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:30.262584   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:30.308456   65622 cri.go:89] found id: ""
	I0318 22:02:30.308482   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.308493   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:30.308500   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:30.308544   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:30.346818   65622 cri.go:89] found id: ""
	I0318 22:02:30.346845   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.346853   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:30.346859   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:30.346914   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:30.387265   65622 cri.go:89] found id: ""
	I0318 22:02:30.387298   65622 logs.go:276] 0 containers: []
	W0318 22:02:30.387307   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:30.387317   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:30.387336   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:30.446382   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:30.446409   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:30.462305   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:30.462329   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:30.538560   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:30.538583   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:30.538598   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:30.622537   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:30.622571   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:33.172154   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:33.186477   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:33.186540   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:33.223436   65622 cri.go:89] found id: ""
	I0318 22:02:33.223464   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.223474   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:33.223481   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:33.223537   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:33.264785   65622 cri.go:89] found id: ""
	I0318 22:02:33.264810   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.264821   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:33.264829   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:33.264881   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:33.308014   65622 cri.go:89] found id: ""
	I0318 22:02:33.308035   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.308045   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:33.308055   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:33.308109   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:33.348188   65622 cri.go:89] found id: ""
	I0318 22:02:33.348215   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.348224   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:33.348231   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:33.348292   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:33.387905   65622 cri.go:89] found id: ""
	I0318 22:02:33.387935   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.387946   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:33.387954   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:33.388015   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:33.430915   65622 cri.go:89] found id: ""
	I0318 22:02:33.430944   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.430956   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:33.430964   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:33.431019   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:33.473103   65622 cri.go:89] found id: ""
	I0318 22:02:33.473128   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.473135   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:33.473140   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:33.473197   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:33.512960   65622 cri.go:89] found id: ""
	I0318 22:02:33.512992   65622 logs.go:276] 0 containers: []
	W0318 22:02:33.513003   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:33.513015   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:33.513029   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:33.569517   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:33.569554   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:33.585235   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:33.585263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:33.659494   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:33.659519   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:33.659538   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:33.749134   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:33.749181   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:31.820868   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.822075   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:31.802719   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:34.301730   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:33.692959   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.194871   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.306589   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:36.321602   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:36.321654   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:36.364047   65622 cri.go:89] found id: ""
	I0318 22:02:36.364068   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.364076   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:36.364083   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:36.364139   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:36.406084   65622 cri.go:89] found id: ""
	I0318 22:02:36.406111   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.406119   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:36.406125   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:36.406176   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:36.450861   65622 cri.go:89] found id: ""
	I0318 22:02:36.450887   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.450895   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:36.450900   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:36.450946   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:36.493979   65622 cri.go:89] found id: ""
	I0318 22:02:36.494006   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.494014   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:36.494020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:36.494079   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:36.539123   65622 cri.go:89] found id: ""
	I0318 22:02:36.539150   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.539160   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:36.539167   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:36.539233   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:36.577460   65622 cri.go:89] found id: ""
	I0318 22:02:36.577485   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.577495   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:36.577502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:36.577546   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:36.615276   65622 cri.go:89] found id: ""
	I0318 22:02:36.615300   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.615308   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:36.615313   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:36.615369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:36.652756   65622 cri.go:89] found id: ""
	I0318 22:02:36.652775   65622 logs.go:276] 0 containers: []
	W0318 22:02:36.652782   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:36.652790   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:36.652802   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:36.706253   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:36.706282   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:36.722032   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:36.722055   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:36.797758   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:36.797783   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:36.797799   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:36.875589   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:36.875622   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:39.422267   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:39.436967   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:39.437040   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:39.479916   65622 cri.go:89] found id: ""
	I0318 22:02:39.479941   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.479950   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:39.479956   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:39.480012   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:39.542890   65622 cri.go:89] found id: ""
	I0318 22:02:39.542920   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.542930   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:39.542937   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:39.542990   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:39.588200   65622 cri.go:89] found id: ""
	I0318 22:02:39.588225   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.588233   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:39.588239   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:39.588290   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:39.629014   65622 cri.go:89] found id: ""
	I0318 22:02:39.629036   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.629043   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:39.629049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:39.629105   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:39.675522   65622 cri.go:89] found id: ""
	I0318 22:02:39.675551   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.675561   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:39.675569   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:39.675629   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:39.722842   65622 cri.go:89] found id: ""
	I0318 22:02:39.722873   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.722883   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:39.722890   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:39.722951   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:39.760410   65622 cri.go:89] found id: ""
	I0318 22:02:39.760440   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.760451   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:39.760458   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:39.760519   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:39.799982   65622 cri.go:89] found id: ""
	I0318 22:02:39.800007   65622 logs.go:276] 0 containers: []
	W0318 22:02:39.800016   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:39.800027   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:39.800045   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:39.878784   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:39.878805   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:39.878821   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:39.965987   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:39.966021   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:36.320427   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.321178   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:36.799943   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:39.300691   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:38.699873   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.193658   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:40.015006   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:40.015040   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:40.068619   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:40.068648   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:42.586444   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:42.603310   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:42.603394   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:42.645260   65622 cri.go:89] found id: ""
	I0318 22:02:42.645288   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.645296   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:42.645301   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:42.645360   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:42.682004   65622 cri.go:89] found id: ""
	I0318 22:02:42.682029   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.682036   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:42.682042   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:42.682086   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:42.722886   65622 cri.go:89] found id: ""
	I0318 22:02:42.722922   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.722939   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:42.722947   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:42.723008   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:42.759183   65622 cri.go:89] found id: ""
	I0318 22:02:42.759208   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.759218   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:42.759224   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:42.759283   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:42.799292   65622 cri.go:89] found id: ""
	I0318 22:02:42.799316   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.799325   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:42.799337   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:42.799389   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:42.838821   65622 cri.go:89] found id: ""
	I0318 22:02:42.838848   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.838856   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:42.838861   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:42.838908   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:42.877889   65622 cri.go:89] found id: ""
	I0318 22:02:42.877917   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.877927   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:42.877935   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:42.877991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:42.921283   65622 cri.go:89] found id: ""
	I0318 22:02:42.921310   65622 logs.go:276] 0 containers: []
	W0318 22:02:42.921323   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:42.921334   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:42.921348   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:43.000405   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:43.000444   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:43.042091   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:43.042116   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:43.094030   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:43.094059   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:43.108612   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:43.108647   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:43.194388   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:40.321388   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:42.822538   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:41.799159   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.800027   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.299156   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:43.693317   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:46.194419   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:45.694881   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:45.709833   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:45.709897   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:45.749770   65622 cri.go:89] found id: ""
	I0318 22:02:45.749797   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.749806   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:45.749812   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:45.749866   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:45.794879   65622 cri.go:89] found id: ""
	I0318 22:02:45.794909   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.794920   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:45.794928   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:45.794988   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:45.841587   65622 cri.go:89] found id: ""
	I0318 22:02:45.841608   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.841618   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:45.841625   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:45.841725   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:45.884972   65622 cri.go:89] found id: ""
	I0318 22:02:45.885004   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.885015   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:45.885023   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:45.885084   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:45.936170   65622 cri.go:89] found id: ""
	I0318 22:02:45.936204   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.936215   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:45.936223   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:45.936286   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:45.993684   65622 cri.go:89] found id: ""
	I0318 22:02:45.993708   65622 logs.go:276] 0 containers: []
	W0318 22:02:45.993715   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:45.993720   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:45.993766   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:46.048422   65622 cri.go:89] found id: ""
	I0318 22:02:46.048445   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.048453   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:46.048459   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:46.048512   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:46.087173   65622 cri.go:89] found id: ""
	I0318 22:02:46.087197   65622 logs.go:276] 0 containers: []
	W0318 22:02:46.087206   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:46.087214   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:46.087227   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:46.168633   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:46.168661   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:46.168675   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:46.250797   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:46.250827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:46.302862   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:46.302883   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:46.358096   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:46.358125   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:48.874275   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:48.890166   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:48.890231   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:48.930832   65622 cri.go:89] found id: ""
	I0318 22:02:48.930861   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.930869   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:48.930875   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:48.930919   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:48.972784   65622 cri.go:89] found id: ""
	I0318 22:02:48.972809   65622 logs.go:276] 0 containers: []
	W0318 22:02:48.972819   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:48.972826   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:48.972884   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:49.011201   65622 cri.go:89] found id: ""
	I0318 22:02:49.011222   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.011229   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:49.011235   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:49.011277   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:49.050457   65622 cri.go:89] found id: ""
	I0318 22:02:49.050480   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.050496   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:49.050502   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:49.050565   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:49.087585   65622 cri.go:89] found id: ""
	I0318 22:02:49.087611   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.087621   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:49.087629   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:49.087687   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:49.126761   65622 cri.go:89] found id: ""
	I0318 22:02:49.126794   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.126805   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:49.126813   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:49.126874   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:49.166045   65622 cri.go:89] found id: ""
	I0318 22:02:49.166074   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.166085   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:49.166092   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:49.166147   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:49.205624   65622 cri.go:89] found id: ""
	I0318 22:02:49.205650   65622 logs.go:276] 0 containers: []
	W0318 22:02:49.205660   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:49.205670   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:49.205684   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:49.257864   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:49.257891   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:49.272581   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:49.272606   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:49.349960   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:49.349981   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:49.349996   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:49.438873   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:49.438916   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:45.322637   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:47.820481   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.300259   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.798429   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:48.693209   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:50.693611   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:51.984840   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:52.002378   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:52.002436   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:52.040871   65622 cri.go:89] found id: ""
	I0318 22:02:52.040890   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.040898   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:52.040917   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:52.040973   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:52.076062   65622 cri.go:89] found id: ""
	I0318 22:02:52.076083   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.076090   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:52.076096   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:52.076167   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:52.119597   65622 cri.go:89] found id: ""
	I0318 22:02:52.119621   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.119629   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:52.119635   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:52.119690   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:52.157892   65622 cri.go:89] found id: ""
	I0318 22:02:52.157919   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.157929   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:52.157936   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:52.157995   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:52.196738   65622 cri.go:89] found id: ""
	I0318 22:02:52.196760   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.196767   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:52.196772   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:52.196836   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:52.234012   65622 cri.go:89] found id: ""
	I0318 22:02:52.234036   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.234043   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:52.234049   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:52.234104   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:52.273720   65622 cri.go:89] found id: ""
	I0318 22:02:52.273750   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.273761   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:52.273769   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:52.273817   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:52.317495   65622 cri.go:89] found id: ""
	I0318 22:02:52.317525   65622 logs.go:276] 0 containers: []
	W0318 22:02:52.317535   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:52.317545   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:52.317619   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:52.371640   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:52.371666   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:52.387141   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:52.387165   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:52.469009   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:52.469035   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:52.469047   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:52.550848   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:52.550880   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:50.322017   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.820364   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:54.820692   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.799942   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.301665   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:52.694058   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.194171   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:55.096980   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:55.111353   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:55.111406   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:55.155832   65622 cri.go:89] found id: ""
	I0318 22:02:55.155857   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.155875   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:55.155882   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:55.155942   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:55.195477   65622 cri.go:89] found id: ""
	I0318 22:02:55.195499   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.195509   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:55.195516   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:55.195567   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:55.234536   65622 cri.go:89] found id: ""
	I0318 22:02:55.234564   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.234574   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:55.234582   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:55.234640   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:55.270955   65622 cri.go:89] found id: ""
	I0318 22:02:55.270977   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.270984   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:55.270989   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:55.271033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:55.308883   65622 cri.go:89] found id: ""
	I0318 22:02:55.308919   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.308930   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:55.308937   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:55.308985   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:55.355259   65622 cri.go:89] found id: ""
	I0318 22:02:55.355284   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.355294   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:55.355301   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:55.355364   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:55.392385   65622 cri.go:89] found id: ""
	I0318 22:02:55.392409   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.392417   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:55.392423   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:55.392466   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:55.433773   65622 cri.go:89] found id: ""
	I0318 22:02:55.433794   65622 logs.go:276] 0 containers: []
	W0318 22:02:55.433802   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:55.433810   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:55.433827   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:55.518513   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:55.518536   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:55.518553   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:55.602717   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:55.602751   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:55.652409   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:55.652436   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:55.707150   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:55.707175   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.223146   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:02:58.240213   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:02:58.240288   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:02:58.280676   65622 cri.go:89] found id: ""
	I0318 22:02:58.280702   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.280711   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:02:58.280719   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:02:58.280778   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:02:58.324490   65622 cri.go:89] found id: ""
	I0318 22:02:58.324515   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.324524   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:02:58.324531   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:02:58.324592   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:02:58.370256   65622 cri.go:89] found id: ""
	I0318 22:02:58.370288   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.370298   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:02:58.370309   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:02:58.370369   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:02:58.419969   65622 cri.go:89] found id: ""
	I0318 22:02:58.420002   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.420012   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:02:58.420020   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:02:58.420082   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:02:58.464916   65622 cri.go:89] found id: ""
	I0318 22:02:58.464942   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.464950   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:02:58.464956   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:02:58.465016   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:02:58.511388   65622 cri.go:89] found id: ""
	I0318 22:02:58.511415   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.511425   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:02:58.511433   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:02:58.511500   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:02:58.555314   65622 cri.go:89] found id: ""
	I0318 22:02:58.555344   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.555356   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:02:58.555364   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:02:58.555426   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:02:58.595200   65622 cri.go:89] found id: ""
	I0318 22:02:58.595229   65622 logs.go:276] 0 containers: []
	W0318 22:02:58.595239   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:02:58.595249   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:02:58.595263   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:02:58.642037   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:02:58.642069   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:02:58.700216   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:02:58.700247   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:02:58.715851   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:02:58.715882   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:02:58.792139   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:02:58.792158   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:02:58.792171   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:02:56.821255   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:58.828524   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.303516   65211 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:57.791851   65211 pod_ready.go:81] duration metric: took 4m0.000068811s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" ...
	E0318 22:02:57.791889   65211 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-vt7hj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:02:57.791913   65211 pod_ready.go:38] duration metric: took 4m13.55705031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:02:57.791938   65211 kubeadm.go:591] duration metric: took 4m20.862001116s to restartPrimaryControlPlane
	W0318 22:02:57.792000   65211 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:02:57.792027   65211 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:02:57.692975   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:02:59.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.395212   65622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:01.411364   65622 kubeadm.go:591] duration metric: took 4m3.302597324s to restartPrimaryControlPlane
	W0318 22:03:01.411442   65622 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:01.411474   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:02.800222   65622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.388721926s)
	I0318 22:03:02.800302   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:02.817517   65622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:02.832036   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:02.844307   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:02.844324   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:02.844381   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:02.857804   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:02.857882   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:02.871307   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:02.883191   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:02.883252   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:02.896457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.908089   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:02.908147   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:02.920327   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:02.932098   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:02.932158   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:02.944129   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:03.034197   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:03:03.034333   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:03.204271   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:03.204501   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:03.204645   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:03.415789   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:03.417688   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:03.417801   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:03.417902   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:03.418026   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:03.418129   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:03.418242   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:03.418324   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:03.418420   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:03.418502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:03.418614   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:03.418744   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:03.418823   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:03.418916   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:03.644844   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:03.912013   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:04.097560   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:04.222469   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:04.239066   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:04.250168   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:04.250225   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:04.399277   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:04.401154   65622 out.go:204]   - Booting up control plane ...
	I0318 22:03:04.401283   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:04.406500   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:04.407544   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:04.410177   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:04.418949   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:01.321045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:03.322008   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:01.694585   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:04.195750   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:05.322087   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:07.820940   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:09.822652   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:06.693803   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:08.693856   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:10.694375   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:12.321504   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:14.821435   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:13.192173   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:15.193816   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:16.822327   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.322059   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:17.691761   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:19.691867   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.322674   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.823374   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:21.692710   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:23.695045   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.192838   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:26.322370   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:28.820807   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.165008   65211 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.372946393s)
	I0318 22:03:30.165087   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:30.184259   65211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:03:30.198417   65211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:03:30.210595   65211 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:03:30.210624   65211 kubeadm.go:156] found existing configuration files:
	
	I0318 22:03:30.210675   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:03:30.222159   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:03:30.222210   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:03:30.234099   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:03:30.244546   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:03:30.244621   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:03:30.255192   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.265777   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:03:30.265833   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:03:30.276674   65211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:03:30.286349   65211 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:03:30.286402   65211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:03:30.296530   65211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:03:30.522414   65211 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:03:28.193120   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:30.194300   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:31.321986   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:33.823045   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:32.693115   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:34.693824   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.294937   65211 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:03:39.295015   65211 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:03:39.295142   65211 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:03:39.295296   65211 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:03:39.295451   65211 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:03:39.295550   65211 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:03:39.297047   65211 out.go:204]   - Generating certificates and keys ...
	I0318 22:03:39.297135   65211 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:03:39.297250   65211 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:03:39.297368   65211 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:03:39.297461   65211 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:03:39.297557   65211 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:03:39.297640   65211 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:03:39.297742   65211 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:03:39.297831   65211 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:03:39.297939   65211 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:03:39.298032   65211 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:03:39.298084   65211 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:03:39.298206   65211 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:03:39.298301   65211 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:03:39.298376   65211 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:03:39.298451   65211 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:03:39.298518   65211 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:03:39.298612   65211 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:03:39.298693   65211 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:03:39.299829   65211 out.go:204]   - Booting up control plane ...
	I0318 22:03:39.299959   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:03:39.300052   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:03:39.300150   65211 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:03:39.300308   65211 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:03:39.300444   65211 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:03:39.300496   65211 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:03:39.300713   65211 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:03:39.300829   65211 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003359 seconds
	I0318 22:03:39.300997   65211 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:03:39.301155   65211 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:03:39.301228   65211 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:03:39.301451   65211 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-141758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:03:39.301526   65211 kubeadm.go:309] [bootstrap-token] Using token: p114v6.erax4pf5xkn6x2it
	I0318 22:03:39.302903   65211 out.go:204]   - Configuring RBAC rules ...
	I0318 22:03:39.303025   65211 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:03:39.303133   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:03:39.303301   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:03:39.303479   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:03:39.303574   65211 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:03:39.303651   65211 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:03:39.303810   65211 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:03:39.303886   65211 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:03:39.303960   65211 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:03:39.303972   65211 kubeadm.go:309] 
	I0318 22:03:39.304041   65211 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:03:39.304050   65211 kubeadm.go:309] 
	I0318 22:03:39.304158   65211 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:03:39.304173   65211 kubeadm.go:309] 
	I0318 22:03:39.304208   65211 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:03:39.304292   65211 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:03:39.304368   65211 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:03:39.304377   65211 kubeadm.go:309] 
	I0318 22:03:39.304456   65211 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:03:39.304465   65211 kubeadm.go:309] 
	I0318 22:03:39.304547   65211 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:03:39.304570   65211 kubeadm.go:309] 
	I0318 22:03:39.304649   65211 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:03:39.304754   65211 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:03:39.304861   65211 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:03:39.304878   65211 kubeadm.go:309] 
	I0318 22:03:39.305028   65211 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:03:39.305134   65211 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:03:39.305144   65211 kubeadm.go:309] 
	I0318 22:03:39.305248   65211 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305390   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:03:39.305422   65211 kubeadm.go:309] 	--control-plane 
	I0318 22:03:39.305430   65211 kubeadm.go:309] 
	I0318 22:03:39.305545   65211 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:03:39.305556   65211 kubeadm.go:309] 
	I0318 22:03:39.305676   65211 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p114v6.erax4pf5xkn6x2it \
	I0318 22:03:39.305843   65211 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:03:39.305859   65211 cni.go:84] Creating CNI manager for ""
	I0318 22:03:39.305873   65211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:03:39.307416   65211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:03:36.323956   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:38.821180   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.308819   65211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:03:39.375416   65211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 22:03:39.434235   65211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:03:39.434303   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.434360   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-141758 minikube.k8s.io/updated_at=2024_03_18T22_03_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=embed-certs-141758 minikube.k8s.io/primary=true
	I0318 22:03:39.677778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:39.708540   65211 ops.go:34] apiserver oom_adj: -16
	I0318 22:03:40.178803   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:40.678832   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:37.193451   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:39.193667   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:44.419883   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:03:44.420568   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:44.420749   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:40.821359   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.323788   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:41.678334   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.177921   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:42.678115   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.178034   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:43.678655   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.177993   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:44.678581   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.177929   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:45.678124   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:46.178423   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:41.693587   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:43.693965   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.195060   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:49.421054   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:49.421381   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:45.821472   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:47.822362   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:46.678288   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.178394   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:47.678824   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.178142   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.678144   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.178090   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:49.678295   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.178829   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:50.677856   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:51.177778   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:48.197085   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:50.693056   65699 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:51.192418   65699 pod_ready.go:81] duration metric: took 4m0.006727095s for pod "metrics-server-57f55c9bc5-rdthh" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:51.192452   65699 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0318 22:03:51.192462   65699 pod_ready.go:38] duration metric: took 4m5.551753918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:51.192480   65699 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:51.192514   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:51.192574   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:51.248553   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:51.248575   65699 cri.go:89] found id: ""
	I0318 22:03:51.248583   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:51.248634   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.254205   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:51.254270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:51.303508   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.303534   65699 cri.go:89] found id: ""
	I0318 22:03:51.303543   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:51.303600   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.310160   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:51.310212   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:51.357409   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:51.357429   65699 cri.go:89] found id: ""
	I0318 22:03:51.357436   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:51.357480   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.362683   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:51.362744   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:51.413520   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.413550   65699 cri.go:89] found id: ""
	I0318 22:03:51.413560   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:51.413619   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.419412   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:51.419483   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:51.468338   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:51.468365   65699 cri.go:89] found id: ""
	I0318 22:03:51.468374   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:51.468432   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.474006   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:51.474070   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:51.520166   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:51.520188   65699 cri.go:89] found id: ""
	I0318 22:03:51.520195   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:51.520246   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.526087   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:51.526148   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:51.570735   65699 cri.go:89] found id: ""
	I0318 22:03:51.570761   65699 logs.go:276] 0 containers: []
	W0318 22:03:51.570772   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:51.570779   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:51.570832   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:51.678380   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.178543   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.677807   65211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:03:52.814739   65211 kubeadm.go:1107] duration metric: took 13.380493852s to wait for elevateKubeSystemPrivileges
	W0318 22:03:52.814773   65211 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:03:52.814782   65211 kubeadm.go:393] duration metric: took 5m15.94869953s to StartCluster
	I0318 22:03:52.814803   65211 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.814883   65211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:03:52.816928   65211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:03:52.817192   65211 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:03:52.818800   65211 out.go:177] * Verifying Kubernetes components...
	I0318 22:03:52.817486   65211 config.go:182] Loaded profile config "embed-certs-141758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:03:52.817499   65211 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:03:52.820175   65211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:03:52.818838   65211 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-141758"
	I0318 22:03:52.820277   65211 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-141758"
	W0318 22:03:52.820288   65211 addons.go:243] addon storage-provisioner should already be in state true
	I0318 22:03:52.818844   65211 addons.go:69] Setting metrics-server=true in profile "embed-certs-141758"
	I0318 22:03:52.820369   65211 addons.go:234] Setting addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:52.818848   65211 addons.go:69] Setting default-storageclass=true in profile "embed-certs-141758"
	I0318 22:03:52.820429   65211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-141758"
	I0318 22:03:52.820317   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	W0318 22:03:52.820386   65211 addons.go:243] addon metrics-server should already be in state true
	I0318 22:03:52.820697   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.820821   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820846   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.820872   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.820899   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.821079   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.821107   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.839829   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0318 22:03:52.839850   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I0318 22:03:52.839992   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34809
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840448   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840413   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.840986   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841010   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841124   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841144   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841148   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.841162   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.841385   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841428   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841557   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.841639   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.842001   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842043   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.842049   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.842068   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.845295   65211 addons.go:234] Setting addon default-storageclass=true in "embed-certs-141758"
	W0318 22:03:52.845315   65211 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:03:52.845343   65211 host.go:66] Checking if "embed-certs-141758" exists ...
	I0318 22:03:52.845692   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.845736   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.864111   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0318 22:03:52.864141   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0318 22:03:52.864614   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.864688   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.865181   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865199   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865318   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.865334   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.865556   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866107   65211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:03:52.866147   65211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:03:52.866343   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.866630   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.868253   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.870076   65211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:03:52.871315   65211 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:52.871333   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:03:52.871352   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.873922   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0318 22:03:52.874420   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.874924   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.874944   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.875080   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.875194   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.875254   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.875346   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.875478   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.875718   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.875733   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.876060   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.876234   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.877582   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.879040   65211 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:03:50.320724   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.321791   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:54.821845   65170 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace has status "Ready":"False"
	I0318 22:03:52.880124   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:03:52.880135   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:03:52.880152   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.882530   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.882957   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.882979   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.883230   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.883371   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.883507   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.883638   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:52.886181   65211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0318 22:03:52.886563   65211 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:03:52.887043   65211 main.go:141] libmachine: Using API Version  1
	I0318 22:03:52.887064   65211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:03:52.887416   65211 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:03:52.887599   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetState
	I0318 22:03:52.888998   65211 main.go:141] libmachine: (embed-certs-141758) Calling .DriverName
	I0318 22:03:52.889490   65211 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:52.889504   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:03:52.889519   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHHostname
	I0318 22:03:52.891985   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892380   65211 main.go:141] libmachine: (embed-certs-141758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:20:63", ip: ""} in network mk-embed-certs-141758: {Iface:virbr3 ExpiryTime:2024-03-18 22:58:19 +0000 UTC Type:0 Mac:52:54:00:10:20:63 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:embed-certs-141758 Clientid:01:52:54:00:10:20:63}
	I0318 22:03:52.892435   65211 main.go:141] libmachine: (embed-certs-141758) DBG | domain embed-certs-141758 has defined IP address 192.168.39.243 and MAC address 52:54:00:10:20:63 in network mk-embed-certs-141758
	I0318 22:03:52.892633   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHPort
	I0318 22:03:52.892776   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHKeyPath
	I0318 22:03:52.892949   65211 main.go:141] libmachine: (embed-certs-141758) Calling .GetSSHUsername
	I0318 22:03:52.893066   65211 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/embed-certs-141758/id_rsa Username:docker}
	I0318 22:03:53.047557   65211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:03:53.098470   65211 node_ready.go:35] waiting up to 6m0s for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111074   65211 node_ready.go:49] node "embed-certs-141758" has status "Ready":"True"
	I0318 22:03:53.111093   65211 node_ready.go:38] duration metric: took 12.593803ms for node "embed-certs-141758" to be "Ready" ...
	I0318 22:03:53.111102   65211 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:53.127297   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:53.167460   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:03:53.167476   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:03:53.199789   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:03:53.221070   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:03:53.233431   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:03:53.233452   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:03:53.298339   65211 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:53.298368   65211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:03:53.415046   65211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:03:55.057164   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.85734001s)
	I0318 22:03:55.057233   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057252   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057553   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.057590   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057601   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.057614   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.057634   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.057888   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.057929   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.064097   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.064111   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.064376   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.064402   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.064418   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.138948   65211 pod_ready.go:92] pod "coredns-5dd5756b68-k675p" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.138968   65211 pod_ready.go:81] duration metric: took 2.011647544s for pod "coredns-5dd5756b68-k675p" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.138976   65211 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150187   65211 pod_ready.go:92] pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.150204   65211 pod_ready.go:81] duration metric: took 11.222328ms for pod "coredns-5dd5756b68-rlz67" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.150213   65211 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157054   65211 pod_ready.go:92] pod "etcd-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.157073   65211 pod_ready.go:81] duration metric: took 6.853876ms for pod "etcd-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.157086   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.167962   65211 pod_ready.go:92] pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.167986   65211 pod_ready.go:81] duration metric: took 10.892042ms for pod "kube-apiserver-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.168000   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177187   65211 pod_ready.go:92] pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.177204   65211 pod_ready.go:81] duration metric: took 9.197593ms for pod "kube-controller-manager-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.177213   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.515883   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.294780085s)
	I0318 22:03:55.515937   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.515948   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.515952   65211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.100869127s)
	I0318 22:03:55.515994   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516014   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516301   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516378   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516469   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516481   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516491   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516406   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516451   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516665   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.516683   65211 main.go:141] libmachine: Making call to close driver server
	I0318 22:03:55.516691   65211 main.go:141] libmachine: (embed-certs-141758) Calling .Close
	I0318 22:03:55.516772   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.516839   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.516867   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519334   65211 main.go:141] libmachine: (embed-certs-141758) DBG | Closing plugin on server side
	I0318 22:03:55.519340   65211 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:03:55.519355   65211 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:03:55.519365   65211 addons.go:470] Verifying addon metrics-server=true in "embed-certs-141758"
	I0318 22:03:55.520941   65211 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 22:03:55.522318   65211 addons.go:505] duration metric: took 2.704813533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 22:03:55.545590   65211 pod_ready.go:92] pod "kube-proxy-jltc7" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.545614   65211 pod_ready.go:81] duration metric: took 368.395697ms for pod "kube-proxy-jltc7" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.545625   65211 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932726   65211 pod_ready.go:92] pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace has status "Ready":"True"
	I0318 22:03:55.932750   65211 pod_ready.go:81] duration metric: took 387.117475ms for pod "kube-scheduler-embed-certs-141758" in "kube-system" namespace to be "Ready" ...
	I0318 22:03:55.932757   65211 pod_ready.go:38] duration metric: took 2.821645915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:55.932771   65211 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:03:55.932815   65211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.969924   65211 api_server.go:72] duration metric: took 3.152691986s to wait for apiserver process to appear ...
	I0318 22:03:55.969955   65211 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.969977   65211 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I0318 22:03:55.976004   65211 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I0318 22:03:55.977450   65211 api_server.go:141] control plane version: v1.28.4
	I0318 22:03:55.977489   65211 api_server.go:131] duration metric: took 7.525909ms to wait for apiserver health ...
	I0318 22:03:55.977499   65211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:56.138403   65211 system_pods.go:59] 9 kube-system pods found
	I0318 22:03:56.138429   65211 system_pods.go:61] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.138434   65211 system_pods.go:61] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.138438   65211 system_pods.go:61] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.138441   65211 system_pods.go:61] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.138444   65211 system_pods.go:61] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.138448   65211 system_pods.go:61] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.138453   65211 system_pods.go:61] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.138462   65211 system_pods.go:61] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.138519   65211 system_pods.go:61] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 22:03:56.138532   65211 system_pods.go:74] duration metric: took 161.01924ms to wait for pod list to return data ...
	I0318 22:03:56.138544   65211 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:03:56.331884   65211 default_sa.go:45] found service account: "default"
	I0318 22:03:56.331926   65211 default_sa.go:55] duration metric: took 193.36174ms for default service account to be created ...
	I0318 22:03:56.331937   65211 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:03:56.536411   65211 system_pods.go:86] 9 kube-system pods found
	I0318 22:03:56.536443   65211 system_pods.go:89] "coredns-5dd5756b68-k675p" [727682ae-0ac1-4854-a49c-0f6ae4384551] Running
	I0318 22:03:56.536452   65211 system_pods.go:89] "coredns-5dd5756b68-rlz67" [babdb200-b39a-4555-b14f-12e448531cf2] Running
	I0318 22:03:56.536459   65211 system_pods.go:89] "etcd-embed-certs-141758" [3bcdfefe-52f6-4268-8264-979d449c78e1] Running
	I0318 22:03:56.536466   65211 system_pods.go:89] "kube-apiserver-embed-certs-141758" [8ec768f3-abb4-488c-94f6-fb41bb26bfdb] Running
	I0318 22:03:56.536472   65211 system_pods.go:89] "kube-controller-manager-embed-certs-141758" [afa159fc-13e9-4c48-91d8-c21639ce0c01] Running
	I0318 22:03:56.536479   65211 system_pods.go:89] "kube-proxy-jltc7" [b6402012-bfc2-4049-b813-a9fa547277a7] Running
	I0318 22:03:56.536486   65211 system_pods.go:89] "kube-scheduler-embed-certs-141758" [91acf017-6120-478f-bcb5-d32b685f26c7] Running
	I0318 22:03:56.536497   65211 system_pods.go:89] "metrics-server-57f55c9bc5-pmkgs" [e180b0c7-9efd-4063-b7be-9947b5f9522d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:03:56.536507   65211 system_pods.go:89] "storage-provisioner" [3b08bb6c-9220-4ae9-83f9-0260b1e4a39f] Running
	I0318 22:03:56.536518   65211 system_pods.go:126] duration metric: took 204.57366ms to wait for k8s-apps to be running ...
	I0318 22:03:56.536531   65211 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:03:56.536579   65211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:03:56.557315   65211 system_svc.go:56] duration metric: took 20.775851ms WaitForService to wait for kubelet
	I0318 22:03:56.557344   65211 kubeadm.go:576] duration metric: took 3.740121987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:03:56.557375   65211 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:03:51.614216   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:51.614235   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.614239   65699 cri.go:89] found id: ""
	I0318 22:03:51.614245   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:51.614297   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.619100   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:51.623808   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:51.623827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:51.780027   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:51.780067   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:51.842134   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:51.842167   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:51.889769   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:51.889797   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:51.942502   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:51.942543   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:52.467986   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:52.468043   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:52.518980   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:52.519023   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:52.536546   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:52.536586   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:52.591854   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:52.591894   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:52.640783   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:52.640818   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:52.687934   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:52.687967   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:52.749690   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:52.749726   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:52.807019   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:52.807064   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:55.392930   65699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:03:55.415406   65699 api_server.go:72] duration metric: took 4m15.533409678s to wait for apiserver process to appear ...
	I0318 22:03:55.415435   65699 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:03:55.415472   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:55.415523   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:55.474200   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:55.474227   65699 cri.go:89] found id: ""
	I0318 22:03:55.474237   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:55.474295   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.479787   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:55.479907   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:55.532114   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:55.532136   65699 cri.go:89] found id: ""
	I0318 22:03:55.532145   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:55.532202   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.537215   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:55.537270   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:55.588633   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:55.588657   65699 cri.go:89] found id: ""
	I0318 22:03:55.588666   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:55.588723   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.595711   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:55.595777   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:55.646684   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:55.646704   65699 cri.go:89] found id: ""
	I0318 22:03:55.646714   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:55.646770   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.651920   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:55.651982   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:55.694948   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:55.694975   65699 cri.go:89] found id: ""
	I0318 22:03:55.694984   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:55.695035   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.700275   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:55.700343   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:55.740536   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:55.740559   65699 cri.go:89] found id: ""
	I0318 22:03:55.740568   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:55.740618   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.745384   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:55.745446   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:55.784614   65699 cri.go:89] found id: ""
	I0318 22:03:55.784645   65699 logs.go:276] 0 containers: []
	W0318 22:03:55.784657   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:55.784664   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:55.784727   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:55.827306   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:55.827334   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:55.827341   65699 cri.go:89] found id: ""
	I0318 22:03:55.827349   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:55.827404   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.832314   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:55.838497   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:03:55.838520   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:03:55.857285   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:03:55.857319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:03:55.984597   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:03:55.984629   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:56.044283   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:03:56.044339   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:56.100329   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:03:56.100363   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:56.173231   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:03:56.173270   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:56.221280   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:03:56.221310   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:03:56.274110   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:03:56.274138   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:03:56.332863   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:03:56.332891   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:56.374289   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:03:56.374317   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:56.423793   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:03:56.423827   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:56.478696   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:03:56.478734   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:56.518600   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:56.518627   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:03:56.731788   65211 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:03:56.731810   65211 node_conditions.go:123] node cpu capacity is 2
	I0318 22:03:56.731823   65211 node_conditions.go:105] duration metric: took 174.442649ms to run NodePressure ...
	I0318 22:03:56.731835   65211 start.go:240] waiting for startup goroutines ...
	I0318 22:03:56.731845   65211 start.go:245] waiting for cluster config update ...
	I0318 22:03:56.731857   65211 start.go:254] writing updated cluster config ...
	I0318 22:03:56.732109   65211 ssh_runner.go:195] Run: rm -f paused
	I0318 22:03:56.778660   65211 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:03:56.780431   65211 out.go:177] * Done! kubectl is now configured to use "embed-certs-141758" cluster and "default" namespace by default
	I0318 22:03:59.422001   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:03:59.422212   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:03:56.814631   65170 pod_ready.go:81] duration metric: took 4m0.000725499s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" ...
	E0318 22:03:56.814661   65170 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5dtf5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 22:03:56.814684   65170 pod_ready.go:38] duration metric: took 4m11.531709977s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:03:56.814712   65170 kubeadm.go:591] duration metric: took 4m19.482098142s to restartPrimaryControlPlane
	W0318 22:03:56.814767   65170 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 22:03:56.814797   65170 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:03:59.480665   65699 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I0318 22:03:59.485792   65699 api_server.go:279] https://192.168.72.84:8443/healthz returned 200:
	ok
	I0318 22:03:59.487343   65699 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 22:03:59.487364   65699 api_server.go:131] duration metric: took 4.071921663s to wait for apiserver health ...
	I0318 22:03:59.487375   65699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:03:59.487406   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:03:59.487462   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:03:59.540845   65699 cri.go:89] found id: "d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:03:59.540872   65699 cri.go:89] found id: ""
	I0318 22:03:59.540881   65699 logs.go:276] 1 containers: [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce]
	I0318 22:03:59.540958   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.547759   65699 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:03:59.547824   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:03:59.593015   65699 cri.go:89] found id: "d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:03:59.593042   65699 cri.go:89] found id: ""
	I0318 22:03:59.593051   65699 logs.go:276] 1 containers: [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4]
	I0318 22:03:59.593106   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.598169   65699 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:03:59.598233   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:03:59.638484   65699 cri.go:89] found id: "95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:03:59.638508   65699 cri.go:89] found id: ""
	I0318 22:03:59.638517   65699 logs.go:276] 1 containers: [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540]
	I0318 22:03:59.638575   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.643353   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:03:59.643416   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:03:59.687190   65699 cri.go:89] found id: "4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:03:59.687208   65699 cri.go:89] found id: ""
	I0318 22:03:59.687216   65699 logs.go:276] 1 containers: [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5]
	I0318 22:03:59.687271   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.692481   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:03:59.692550   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:03:59.735798   65699 cri.go:89] found id: "757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:03:59.735824   65699 cri.go:89] found id: ""
	I0318 22:03:59.735834   65699 logs.go:276] 1 containers: [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5]
	I0318 22:03:59.735893   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.742192   65699 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:03:59.742263   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:03:59.782961   65699 cri.go:89] found id: "6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:03:59.782989   65699 cri.go:89] found id: ""
	I0318 22:03:59.783000   65699 logs.go:276] 1 containers: [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84]
	I0318 22:03:59.783060   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.788247   65699 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:03:59.788325   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:03:59.836955   65699 cri.go:89] found id: ""
	I0318 22:03:59.836983   65699 logs.go:276] 0 containers: []
	W0318 22:03:59.836992   65699 logs.go:278] No container was found matching "kindnet"
	I0318 22:03:59.836998   65699 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0318 22:03:59.837052   65699 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0318 22:03:59.879225   65699 cri.go:89] found id: "9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:03:59.879250   65699 cri.go:89] found id: "761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:03:59.879255   65699 cri.go:89] found id: ""
	I0318 22:03:59.879264   65699 logs.go:276] 2 containers: [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968]
	I0318 22:03:59.879323   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.884380   65699 ssh_runner.go:195] Run: which crictl
	I0318 22:03:59.889289   65699 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:03:59.889316   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:04:00.307344   65699 logs.go:123] Gathering logs for dmesg ...
	I0318 22:04:00.307389   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:04:00.325472   65699 logs.go:123] Gathering logs for etcd [d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4] ...
	I0318 22:04:00.325496   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d27b0e98d5f6777a705197affde72d266258815824ee87d58b4da83debb1fbe4"
	I0318 22:04:00.388254   65699 logs.go:123] Gathering logs for coredns [95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540] ...
	I0318 22:04:00.388288   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95d95025af7873220c6f5cc76a41c1f58e58730878917c344f557ca449995540"
	I0318 22:04:00.430203   65699 logs.go:123] Gathering logs for kube-scheduler [4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5] ...
	I0318 22:04:00.430241   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4896452ff8ddb01531e5ca9163d0ac4337ce48f03616312d876cc782646f5ce5"
	I0318 22:04:00.476834   65699 logs.go:123] Gathering logs for kube-controller-manager [6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84] ...
	I0318 22:04:00.476861   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b309d737fd2ff0404e580e629c4b4045c217458938b762fc03ee99a5e366b84"
	I0318 22:04:00.532672   65699 logs.go:123] Gathering logs for storage-provisioner [9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441] ...
	I0318 22:04:00.532703   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9559a9b3fa160ae8a0e605711a21812ec890ac9133fe955a2bdb5e5a4d77c441"
	I0318 22:04:00.572174   65699 logs.go:123] Gathering logs for storage-provisioner [761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968] ...
	I0318 22:04:00.572202   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761bc0d14f31ec6f95c014ba7af1680b07987f38ccd1ebf6e77c1731ac005968"
	I0318 22:04:00.624250   65699 logs.go:123] Gathering logs for container status ...
	I0318 22:04:00.624283   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 22:04:00.688520   65699 logs.go:123] Gathering logs for kubelet ...
	I0318 22:04:00.688551   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:04:00.764279   65699 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:04:00.764319   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 22:04:00.903231   65699 logs.go:123] Gathering logs for kube-apiserver [d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce] ...
	I0318 22:04:00.903262   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d723ad24bd61e718a72db45a5dd4f55baac120630ad01f9336f4e84a408306ce"
	I0318 22:04:00.974836   65699 logs.go:123] Gathering logs for kube-proxy [757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5] ...
	I0318 22:04:00.974869   65699 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 757a8fc5ae06dcd8fdfe183e1685d6e098e11566f8679c74880aaa67effcfdc5"
	I0318 22:04:03.547135   65699 system_pods.go:59] 8 kube-system pods found
	I0318 22:04:03.547166   65699 system_pods.go:61] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.547172   65699 system_pods.go:61] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.547180   65699 system_pods.go:61] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.547186   65699 system_pods.go:61] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.547193   65699 system_pods.go:61] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.547198   65699 system_pods.go:61] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.547208   65699 system_pods.go:61] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.547214   65699 system_pods.go:61] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.547224   65699 system_pods.go:74] duration metric: took 4.059842092s to wait for pod list to return data ...
	I0318 22:04:03.547233   65699 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:03.554656   65699 default_sa.go:45] found service account: "default"
	I0318 22:04:03.554682   65699 default_sa.go:55] duration metric: took 7.437557ms for default service account to be created ...
	I0318 22:04:03.554692   65699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:03.562342   65699 system_pods.go:86] 8 kube-system pods found
	I0318 22:04:03.562369   65699 system_pods.go:89] "coredns-76f75df574-6mtzp" [b5c2b5e8-23c6-493b-97cd-861ca5c9d28a] Running
	I0318 22:04:03.562374   65699 system_pods.go:89] "etcd-no-preload-963041" [6fc5168e-1788-4879-8d77-82ac96cf7568] Running
	I0318 22:04:03.562378   65699 system_pods.go:89] "kube-apiserver-no-preload-963041" [3db1f4ac-d71b-4c57-b7e7-4f6185145037] Running
	I0318 22:04:03.562383   65699 system_pods.go:89] "kube-controller-manager-no-preload-963041" [2f44918a-dc27-4a7d-935b-d519a1cdcbc6] Running
	I0318 22:04:03.562387   65699 system_pods.go:89] "kube-proxy-kkrzx" [7e568f4e-de96-4981-a397-cdf1a578c5b6] Running
	I0318 22:04:03.562391   65699 system_pods.go:89] "kube-scheduler-no-preload-963041" [4544bf72-8cf8-4d54-9f4b-26a07c15f448] Running
	I0318 22:04:03.562397   65699 system_pods.go:89] "metrics-server-57f55c9bc5-rdthh" [50c41dcb-a0bd-4098-a4f0-9eb619c8f2b2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 22:04:03.562402   65699 system_pods.go:89] "storage-provisioner" [d7579bb6-4512-4a79-adf6-40745192d451] Running
	I0318 22:04:03.562410   65699 system_pods.go:126] duration metric: took 7.712357ms to wait for k8s-apps to be running ...
	I0318 22:04:03.562424   65699 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:03.562470   65699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:03.579949   65699 system_svc.go:56] duration metric: took 17.517801ms WaitForService to wait for kubelet
	I0318 22:04:03.579977   65699 kubeadm.go:576] duration metric: took 4m23.697982351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:03.579993   65699 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:03.585009   65699 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:03.585037   65699 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:03.585049   65699 node_conditions.go:105] duration metric: took 5.050614ms to run NodePressure ...
	I0318 22:04:03.585063   65699 start.go:240] waiting for startup goroutines ...
	I0318 22:04:03.585075   65699 start.go:245] waiting for cluster config update ...
	I0318 22:04:03.585089   65699 start.go:254] writing updated cluster config ...
	I0318 22:04:03.585426   65699 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:03.634969   65699 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 22:04:03.637561   65699 out.go:177] * Done! kubectl is now configured to use "no-preload-963041" cluster and "default" namespace by default
	I0318 22:04:19.422826   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:19.423111   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:29.143869   65170 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.329052492s)
	I0318 22:04:29.143935   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:29.161708   65170 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 22:04:29.173738   65170 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:04:29.185221   65170 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:04:29.185241   65170 kubeadm.go:156] found existing configuration files:
	
	I0318 22:04:29.185273   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 22:04:29.196326   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:04:29.196382   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:04:29.207305   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 22:04:29.217759   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:04:29.217811   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:04:29.228350   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.239148   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:04:29.239191   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:04:29.251191   65170 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 22:04:29.262291   65170 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:04:29.262339   65170 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 22:04:29.273343   65170 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:04:29.332561   65170 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 22:04:29.333329   65170 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:04:29.496432   65170 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:04:29.496558   65170 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:04:29.496720   65170 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:04:29.728202   65170 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:04:29.730047   65170 out.go:204]   - Generating certificates and keys ...
	I0318 22:04:29.730126   65170 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:04:29.730202   65170 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:04:29.730297   65170 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:04:29.730669   65170 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:04:29.731209   65170 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:04:29.731887   65170 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:04:29.732569   65170 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:04:29.733362   65170 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:04:29.734045   65170 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:04:29.734477   65170 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:04:29.735264   65170 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:04:29.735340   65170 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:04:30.122363   65170 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:04:30.296021   65170 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:04:30.555774   65170 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:04:30.674403   65170 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:04:30.674943   65170 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:04:30.677509   65170 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:04:30.679219   65170 out.go:204]   - Booting up control plane ...
	I0318 22:04:30.679319   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:04:30.679402   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:04:30.681975   65170 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:04:30.701015   65170 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:04:30.701902   65170 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:04:30.702104   65170 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:04:30.843019   65170 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:04:36.846312   65170 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002976 seconds
	I0318 22:04:36.846520   65170 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 22:04:36.870892   65170 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 22:04:37.410373   65170 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 22:04:37.410649   65170 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-660775 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 22:04:37.935730   65170 kubeadm.go:309] [bootstrap-token] Using token: jwgiie.tp4r5ug6emevtbxj
	I0318 22:04:37.937024   65170 out.go:204]   - Configuring RBAC rules ...
	I0318 22:04:37.937156   65170 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 22:04:37.943204   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 22:04:37.951400   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 22:04:37.958005   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 22:04:37.962013   65170 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 22:04:37.965783   65170 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 22:04:37.985150   65170 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 22:04:38.241561   65170 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 22:04:38.355495   65170 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 22:04:38.356452   65170 kubeadm.go:309] 
	I0318 22:04:38.356511   65170 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 22:04:38.356520   65170 kubeadm.go:309] 
	I0318 22:04:38.356598   65170 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 22:04:38.356609   65170 kubeadm.go:309] 
	I0318 22:04:38.356667   65170 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 22:04:38.356774   65170 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 22:04:38.356828   65170 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 22:04:38.356844   65170 kubeadm.go:309] 
	I0318 22:04:38.356898   65170 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 22:04:38.356916   65170 kubeadm.go:309] 
	I0318 22:04:38.356976   65170 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 22:04:38.356984   65170 kubeadm.go:309] 
	I0318 22:04:38.357030   65170 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 22:04:38.357093   65170 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 22:04:38.357161   65170 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 22:04:38.357168   65170 kubeadm.go:309] 
	I0318 22:04:38.357263   65170 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 22:04:38.357364   65170 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 22:04:38.357376   65170 kubeadm.go:309] 
	I0318 22:04:38.357495   65170 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.357657   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc \
	I0318 22:04:38.357707   65170 kubeadm.go:309] 	--control-plane 
	I0318 22:04:38.357724   65170 kubeadm.go:309] 
	I0318 22:04:38.357861   65170 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 22:04:38.357873   65170 kubeadm.go:309] 
	I0318 22:04:38.357986   65170 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token jwgiie.tp4r5ug6emevtbxj \
	I0318 22:04:38.358144   65170 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:e0779c7b9d18444974652cbe71b93769d1f601814788d1082c85995799c13dcc 
	I0318 22:04:38.358726   65170 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:38.358772   65170 cni.go:84] Creating CNI manager for ""
	I0318 22:04:38.358789   65170 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 22:04:38.360246   65170 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 22:04:38.361264   65170 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 22:04:38.378420   65170 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 22:04:38.482111   65170 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 22:04:38.482178   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:38.482194   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-660775 minikube.k8s.io/updated_at=2024_03_18T22_04_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7632ee0069e86f793b1d91a60b11097c4ea27a76 minikube.k8s.io/name=default-k8s-diff-port-660775 minikube.k8s.io/primary=true
	I0318 22:04:38.617420   65170 ops.go:34] apiserver oom_adj: -16
	I0318 22:04:38.828087   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.328292   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:39.828411   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.328829   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:40.828338   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.329118   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:41.828239   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.328296   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:42.828241   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.329151   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:43.829036   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.328224   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:44.828465   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.328632   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:45.828289   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.328321   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:46.828493   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.329008   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:47.828789   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.328727   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:48.829024   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.329010   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:49.828311   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.328474   65170 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 22:04:50.445593   65170 kubeadm.go:1107] duration metric: took 11.963480655s to wait for elevateKubeSystemPrivileges
	W0318 22:04:50.445640   65170 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 22:04:50.445651   65170 kubeadm.go:393] duration metric: took 5m13.168616417s to StartCluster
	I0318 22:04:50.445672   65170 settings.go:142] acquiring lock: {Name:mke566d21080a5a475910b9510865078c2d5ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.445754   65170 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 22:04:50.447789   65170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/kubeconfig: {Name:mk10e5c5d2e765772d5b71e0dbe13c2fc419d7de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 22:04:50.448086   65170 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.150 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 22:04:50.449989   65170 out.go:177] * Verifying Kubernetes components...
	I0318 22:04:50.448238   65170 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 22:04:50.450030   65170 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450044   65170 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450068   65170 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-660775"
	I0318 22:04:50.450070   65170 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.450078   65170 addons.go:243] addon storage-provisioner should already be in state true
	W0318 22:04:50.450082   65170 addons.go:243] addon metrics-server should already be in state true
	I0318 22:04:50.450105   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450116   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.450033   65170 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-660775"
	I0318 22:04:50.450181   65170 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-660775"
	I0318 22:04:50.450493   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450516   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450550   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.450585   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.450628   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.448310   65170 config.go:182] Loaded profile config "default-k8s-diff-port-660775": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 22:04:50.452465   65170 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 22:04:50.466764   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0318 22:04:50.468214   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
	I0318 22:04:50.468460   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.468676   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469019   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469038   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469182   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.469195   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.469254   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0318 22:04:50.469549   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.469605   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.469603   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470035   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.470053   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.470320   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470350   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.470381   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470385   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.470395   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.470535   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.473854   65170 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-660775"
	W0318 22:04:50.473879   65170 addons.go:243] addon default-storageclass should already be in state true
	I0318 22:04:50.473907   65170 host.go:66] Checking if "default-k8s-diff-port-660775" exists ...
	I0318 22:04:50.474268   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.474301   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.485707   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39175
	I0318 22:04:50.486097   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0318 22:04:50.486278   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486675   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.486809   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.486818   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487074   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.487086   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.487345   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487513   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.487561   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.487759   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.489284   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.491084   65170 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 22:04:50.489730   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.492156   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0318 22:04:50.492539   65170 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.492549   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 22:04:50.492563   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.494057   65170 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 22:04:50.492998   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.495232   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 22:04:50.495253   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 22:04:50.495275   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.495863   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.495887   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.495952   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496316   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.496340   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.496476   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.496620   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.496757   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.496861   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.497350   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.498004   65170 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 22:04:50.498047   65170 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 22:04:50.498450   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.499027   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.499235   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.499406   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.499565   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.499691   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.515126   65170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0318 22:04:50.515913   65170 main.go:141] libmachine: () Calling .GetVersion
	I0318 22:04:50.516473   65170 main.go:141] libmachine: Using API Version  1
	I0318 22:04:50.516498   65170 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 22:04:50.516800   65170 main.go:141] libmachine: () Calling .GetMachineName
	I0318 22:04:50.517008   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetState
	I0318 22:04:50.518559   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .DriverName
	I0318 22:04:50.518811   65170 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.518825   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 22:04:50.518842   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHHostname
	I0318 22:04:50.522625   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523156   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:9c:26", ip: ""} in network mk-default-k8s-diff-port-660775: {Iface:virbr1 ExpiryTime:2024-03-18 22:59:21 +0000 UTC Type:0 Mac:52:54:00:80:9c:26 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:default-k8s-diff-port-660775 Clientid:01:52:54:00:80:9c:26}
	I0318 22:04:50.523537   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | domain default-k8s-diff-port-660775 has defined IP address 192.168.50.150 and MAC address 52:54:00:80:9c:26 in network mk-default-k8s-diff-port-660775
	I0318 22:04:50.523810   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHPort
	I0318 22:04:50.523984   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHKeyPath
	I0318 22:04:50.524193   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .GetSSHUsername
	I0318 22:04:50.524430   65170 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/default-k8s-diff-port-660775/id_rsa Username:docker}
	I0318 22:04:50.682066   65170 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 22:04:50.699269   65170 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709309   65170 node_ready.go:49] node "default-k8s-diff-port-660775" has status "Ready":"True"
	I0318 22:04:50.709330   65170 node_ready.go:38] duration metric: took 10.026001ms for node "default-k8s-diff-port-660775" to be "Ready" ...
	I0318 22:04:50.709342   65170 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.713958   65170 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720434   65170 pod_ready.go:92] pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.720459   65170 pod_ready.go:81] duration metric: took 6.477329ms for pod "etcd-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.720471   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725799   65170 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.725820   65170 pod_ready.go:81] duration metric: took 5.341405ms for pod "kube-apiserver-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.725829   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.730987   65170 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.731006   65170 pod_ready.go:81] duration metric: took 5.171376ms for pod "kube-controller-manager-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.731016   65170 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737458   65170 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace has status "Ready":"True"
	I0318 22:04:50.737481   65170 pod_ready.go:81] duration metric: took 6.458242ms for pod "kube-scheduler-default-k8s-diff-port-660775" in "kube-system" namespace to be "Ready" ...
	I0318 22:04:50.737490   65170 pod_ready.go:38] duration metric: took 28.137606ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 22:04:50.737506   65170 api_server.go:52] waiting for apiserver process to appear ...
	I0318 22:04:50.737560   65170 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 22:04:50.757770   65170 api_server.go:72] duration metric: took 309.622189ms to wait for apiserver process to appear ...
	I0318 22:04:50.757795   65170 api_server.go:88] waiting for apiserver healthz status ...
	I0318 22:04:50.757815   65170 api_server.go:253] Checking apiserver healthz at https://192.168.50.150:8444/healthz ...
	I0318 22:04:50.765732   65170 api_server.go:279] https://192.168.50.150:8444/healthz returned 200:
	ok
	I0318 22:04:50.769202   65170 api_server.go:141] control plane version: v1.28.4
	I0318 22:04:50.769228   65170 api_server.go:131] duration metric: took 11.424563ms to wait for apiserver health ...
	I0318 22:04:50.769238   65170 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 22:04:50.831223   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 22:04:50.859994   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 22:04:50.860014   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 22:04:50.864994   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 22:04:50.905212   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 22:04:50.905257   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 22:04:50.918389   65170 system_pods.go:59] 4 kube-system pods found
	I0318 22:04:50.918416   65170 system_pods.go:61] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:50.918422   65170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:50.918426   65170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:50.918429   65170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:50.918435   65170 system_pods.go:74] duration metric: took 149.190745ms to wait for pod list to return data ...
	I0318 22:04:50.918442   65170 default_sa.go:34] waiting for default service account to be created ...
	I0318 22:04:50.993150   65170 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:50.993174   65170 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 22:04:51.056974   65170 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 22:04:51.124585   65170 default_sa.go:45] found service account: "default"
	I0318 22:04:51.124612   65170 default_sa.go:55] duration metric: took 206.163161ms for default service account to be created ...
	I0318 22:04:51.124624   65170 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 22:04:51.347373   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.347408   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347419   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.347426   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.347433   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.347440   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.347452   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.347458   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.347478   65170 retry.go:31] will retry after 201.830143ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.556559   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.556594   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556605   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.556621   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.556630   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.556638   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.556648   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.556663   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.556681   65170 retry.go:31] will retry after 312.139871ms: missing components: kube-dns, kube-proxy
	I0318 22:04:51.878515   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:51.878546   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878554   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:51.878562   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:51.878568   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:51.878573   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:51.878579   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:51.878582   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:51.878596   65170 retry.go:31] will retry after 379.864885ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.364944   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.364971   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364979   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 22:04:52.364987   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.364995   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.365002   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.365011   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 22:04:52.365018   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.365039   65170 retry.go:31] will retry after 598.040475ms: missing components: kube-dns, kube-proxy
	I0318 22:04:52.752856   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921596456s)
	I0318 22:04:52.752915   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.752928   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753278   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753303   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.753314   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.753323   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.753565   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.753580   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.781081   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:52.781102   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:52.781396   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:52.781417   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:52.973228   65170 system_pods.go:86] 7 kube-system pods found
	I0318 22:04:52.973256   65170 system_pods.go:89] "coredns-5dd5756b68-55f9q" [ce919323-edf8-4caf-8952-2ec4ac6593cd] Running
	I0318 22:04:52.973262   65170 system_pods.go:89] "coredns-5dd5756b68-vmj4l" [4916e690-e21f-4eae-aa11-74ad6c0b7f49] Running
	I0318 22:04:52.973269   65170 system_pods.go:89] "etcd-default-k8s-diff-port-660775" [a3b1b5d0-ba12-4060-931d-889cd91f1155] Running
	I0318 22:04:52.973275   65170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-660775" [f0af1756-de5c-469b-83e3-8c5e314ecade] Running
	I0318 22:04:52.973282   65170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-660775" [c2d62dc5-f4e2-4090-8786-70ff30bea78b] Running
	I0318 22:04:52.973289   65170 system_pods.go:89] "kube-proxy-z2dsq" [8f8591de-c0b4-4e0b-9e4f-623b58a59d08] Running
	I0318 22:04:52.973295   65170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-660775" [d7370841-cf18-463a-8511-3308767daf8f] Running
	I0318 22:04:52.973304   65170 system_pods.go:126] duration metric: took 1.848673952s to wait for k8s-apps to be running ...
	I0318 22:04:52.973310   65170 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 22:04:52.973361   65170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:04:53.343164   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.286142485s)
	I0318 22:04:53.343193   65170 system_svc.go:56] duration metric: took 369.874916ms WaitForService to wait for kubelet
	I0318 22:04:53.343215   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343229   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343216   65170 kubeadm.go:576] duration metric: took 2.89507195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 22:04:53.343238   65170 node_conditions.go:102] verifying NodePressure condition ...
	I0318 22:04:53.343265   65170 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.478242665s)
	I0318 22:04:53.343301   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343311   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.343510   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.343555   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.343564   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.343572   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.343580   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345065   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345078   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345082   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345065   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345094   65170 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-660775"
	I0318 22:04:53.345094   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345117   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.345127   65170 main.go:141] libmachine: Making call to close driver server
	I0318 22:04:53.345136   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) Calling .Close
	I0318 22:04:53.345401   65170 main.go:141] libmachine: (default-k8s-diff-port-660775) DBG | Closing plugin on server side
	I0318 22:04:53.345400   65170 main.go:141] libmachine: Successfully made call to close driver server
	I0318 22:04:53.345419   65170 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 22:04:53.347668   65170 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0318 22:04:53.348839   65170 addons.go:505] duration metric: took 2.900603006s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0318 22:04:53.363245   65170 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 22:04:53.363274   65170 node_conditions.go:123] node cpu capacity is 2
	I0318 22:04:53.363307   65170 node_conditions.go:105] duration metric: took 20.053581ms to run NodePressure ...
	I0318 22:04:53.363325   65170 start.go:240] waiting for startup goroutines ...
	I0318 22:04:53.363339   65170 start.go:245] waiting for cluster config update ...
	I0318 22:04:53.363353   65170 start.go:254] writing updated cluster config ...
	I0318 22:04:53.363674   65170 ssh_runner.go:195] Run: rm -f paused
	I0318 22:04:53.429018   65170 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 22:04:53.430584   65170 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-660775" cluster and "default" namespace by default
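The successful default-k8s-diff-port start above can be spot-checked with the same probes the harness runs; a minimal sketch, assuming the profile name, node IP 192.168.50.150 and API port 8444 reported in the log, and that /healthz remains reachable anonymously (the Kubernetes default):

	# apiserver health endpoint the log polls at 22:04:50
	curl -k https://192.168.50.150:8444/healthz
	# kube-system pods the readiness loop waits for
	kubectl --context default-k8s-diff-port-660775 -n kube-system get pods
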
	I0318 22:04:59.424318   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:04:59.425052   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:04:59.425084   65622 kubeadm.go:309] 
	I0318 22:04:59.425146   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:04:59.425207   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:04:59.425223   65622 kubeadm.go:309] 
	I0318 22:04:59.425262   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:04:59.425298   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:04:59.425454   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:04:59.425481   65622 kubeadm.go:309] 
	I0318 22:04:59.425647   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:04:59.425704   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:04:59.425752   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:04:59.425762   65622 kubeadm.go:309] 
	I0318 22:04:59.425917   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:04:59.426033   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:04:59.426045   65622 kubeadm.go:309] 
	I0318 22:04:59.426212   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:04:59.426346   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:04:59.426454   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:04:59.426547   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:04:59.426558   65622 kubeadm.go:309] 
	I0318 22:04:59.427148   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:04:59.427271   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:04:59.427372   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 22:04:59.427528   65622 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
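The kubeadm wait-control-plane failure above already names the triage steps; collected here as a runnable sketch, to be executed on the affected node (e.g. via `minikube ssh -p <profile>`, profile name elided):

	# check whether the kubelet is running and inspect its recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers cri-o managed to start, then inspect one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
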
	I0318 22:04:59.427572   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 22:05:00.055064   65622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 22:05:00.070514   65622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 22:05:00.083916   65622 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 22:05:00.083938   65622 kubeadm.go:156] found existing configuration files:
	
	I0318 22:05:00.083984   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 22:05:00.095316   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 22:05:00.095362   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 22:05:00.106457   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 22:05:00.117255   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 22:05:00.117309   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 22:05:00.128432   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.138314   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 22:05:00.138371   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 22:05:00.148443   65622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 22:05:00.158539   65622 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 22:05:00.158585   65622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
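The four grep/rm pairs above are minikube's stale-kubeconfig cleanup before the retry; a condensed sketch of the same logic (not the literal minikube code), assuming the endpoint string shown in the log:

	# drop kubeconfig files that do not reference the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
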
	I0318 22:05:00.169165   65622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 22:05:00.245400   65622 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 22:05:00.245473   65622 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 22:05:00.417644   65622 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 22:05:00.417785   65622 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 22:05:00.417883   65622 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 22:05:00.634147   65622 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 22:05:00.635738   65622 out.go:204]   - Generating certificates and keys ...
	I0318 22:05:00.635843   65622 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 22:05:00.635930   65622 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 22:05:00.636028   65622 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 22:05:00.636089   65622 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 22:05:00.636314   65622 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 22:05:00.636537   65622 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 22:05:00.636954   65622 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 22:05:00.637502   65622 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 22:05:00.637924   65622 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 22:05:00.638340   65622 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 22:05:00.638425   65622 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 22:05:00.638514   65622 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 22:05:00.913839   65622 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 22:05:00.990231   65622 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 22:05:01.230957   65622 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 22:05:01.548589   65622 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 22:05:01.567890   65622 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 22:05:01.569831   65622 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 22:05:01.569913   65622 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 22:05:01.734815   65622 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 22:05:01.736685   65622 out.go:204]   - Booting up control plane ...
	I0318 22:05:01.736810   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 22:05:01.749926   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 22:05:01.751335   65622 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 22:05:01.753793   65622 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 22:05:01.754600   65622 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 22:05:41.756944   65622 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 22:05:41.757321   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:41.757565   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:46.758228   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:46.758483   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:05:56.759061   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:05:56.759280   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:16.760134   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:16.760369   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761317   65622 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 22:06:56.761611   65622 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 22:06:56.761630   65622 kubeadm.go:309] 
	I0318 22:06:56.761682   65622 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 22:06:56.761725   65622 kubeadm.go:309] 		timed out waiting for the condition
	I0318 22:06:56.761732   65622 kubeadm.go:309] 
	I0318 22:06:56.761782   65622 kubeadm.go:309] 	This error is likely caused by:
	I0318 22:06:56.761829   65622 kubeadm.go:309] 		- The kubelet is not running
	I0318 22:06:56.761971   65622 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 22:06:56.761988   65622 kubeadm.go:309] 
	I0318 22:06:56.762111   65622 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 22:06:56.762159   65622 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 22:06:56.762207   65622 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 22:06:56.762221   65622 kubeadm.go:309] 
	I0318 22:06:56.762382   65622 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 22:06:56.762502   65622 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 22:06:56.762512   65622 kubeadm.go:309] 
	I0318 22:06:56.762630   65622 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 22:06:56.762758   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 22:06:56.762856   65622 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 22:06:56.762985   65622 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 22:06:56.763011   65622 kubeadm.go:309] 
	I0318 22:06:56.763456   65622 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 22:06:56.763590   65622 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 22:06:56.763681   65622 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 22:06:56.763764   65622 kubeadm.go:393] duration metric: took 7m58.719030677s to StartCluster
	I0318 22:06:56.763817   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 22:06:56.763885   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 22:06:56.813440   65622 cri.go:89] found id: ""
	I0318 22:06:56.813469   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.813480   65622 logs.go:278] No container was found matching "kube-apiserver"
	I0318 22:06:56.813487   65622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 22:06:56.813553   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 22:06:56.852826   65622 cri.go:89] found id: ""
	I0318 22:06:56.852854   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.852865   65622 logs.go:278] No container was found matching "etcd"
	I0318 22:06:56.852872   65622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 22:06:56.852949   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 22:06:56.894024   65622 cri.go:89] found id: ""
	I0318 22:06:56.894049   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.894057   65622 logs.go:278] No container was found matching "coredns"
	I0318 22:06:56.894062   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 22:06:56.894123   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 22:06:56.932924   65622 cri.go:89] found id: ""
	I0318 22:06:56.932955   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.932967   65622 logs.go:278] No container was found matching "kube-scheduler"
	I0318 22:06:56.932975   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 22:06:56.933033   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 22:06:56.973307   65622 cri.go:89] found id: ""
	I0318 22:06:56.973336   65622 logs.go:276] 0 containers: []
	W0318 22:06:56.973344   65622 logs.go:278] No container was found matching "kube-proxy"
	I0318 22:06:56.973350   65622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 22:06:56.973405   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 22:06:57.009107   65622 cri.go:89] found id: ""
	I0318 22:06:57.009134   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.009142   65622 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 22:06:57.009151   65622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 22:06:57.009213   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 22:06:57.046883   65622 cri.go:89] found id: ""
	I0318 22:06:57.046912   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.046922   65622 logs.go:278] No container was found matching "kindnet"
	I0318 22:06:57.046930   65622 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 22:06:57.046991   65622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 22:06:57.087670   65622 cri.go:89] found id: ""
	I0318 22:06:57.087698   65622 logs.go:276] 0 containers: []
	W0318 22:06:57.087709   65622 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 22:06:57.087722   65622 logs.go:123] Gathering logs for kubelet ...
	I0318 22:06:57.087736   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 22:06:57.143284   65622 logs.go:123] Gathering logs for dmesg ...
	I0318 22:06:57.143320   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 22:06:57.159775   65622 logs.go:123] Gathering logs for describe nodes ...
	I0318 22:06:57.159803   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 22:06:57.248520   65622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 22:06:57.248548   65622 logs.go:123] Gathering logs for CRI-O ...
	I0318 22:06:57.248563   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 22:06:57.368197   65622 logs.go:123] Gathering logs for container status ...
	I0318 22:06:57.368230   65622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 22:06:57.413080   65622 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 22:06:57.413134   65622 out.go:239] * 
	W0318 22:06:57.413205   65622 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.413237   65622 out.go:239] * 
	W0318 22:06:57.414373   65622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 22:06:57.417746   65622 out.go:177] 
	W0318 22:06:57.418940   65622 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 22:06:57.419004   65622 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 22:06:57.419028   65622 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 22:06:57.420531   65622 out.go:177] 
	
	
	==> CRI-O <==
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.020421402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800264020394906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47a3340f-5cd2-4bfd-8aff-fee32c07dbb3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.020939930Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adf39a5a-3e93-4c1a-8635-0f8b0e3082ae name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.020988537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adf39a5a-3e93-4c1a-8635-0f8b0e3082ae name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.021018766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=adf39a5a-3e93-4c1a-8635-0f8b0e3082ae name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.063822366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0a936c2-7c5e-4227-a8c1-708fa297b1ec name=/runtime.v1.RuntimeService/Version
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.063930409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0a936c2-7c5e-4227-a8c1-708fa297b1ec name=/runtime.v1.RuntimeService/Version
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.066164684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cc95af6-ba82-4cbe-8c6b-0b3f2683bd60 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.066613523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800264066591509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cc95af6-ba82-4cbe-8c6b-0b3f2683bd60 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.067691635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4657baa-e052-4fa5-aad5-f2ce88120ffe name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.067747541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4657baa-e052-4fa5-aad5-f2ce88120ffe name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.067784400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e4657baa-e052-4fa5-aad5-f2ce88120ffe name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.101964922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0eda5723-9911-4b41-b77c-26ec8318c777 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.102134591Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0eda5723-9911-4b41-b77c-26ec8318c777 name=/runtime.v1.RuntimeService/Version
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.105424447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=508a715c-4f7f-433c-901c-1fe010578617 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.105811008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800264105786807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=508a715c-4f7f-433c-901c-1fe010578617 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.106400156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63915987-e7e9-4ce6-8e61-09b3fd5827c4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.106453433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63915987-e7e9-4ce6-8e61-09b3fd5827c4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.106496694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=63915987-e7e9-4ce6-8e61-09b3fd5827c4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.146475533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9be8280c-1fa4-43a3-9290-34321419a82c name=/runtime.v1.RuntimeService/Version
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.146575183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9be8280c-1fa4-43a3-9290-34321419a82c name=/runtime.v1.RuntimeService/Version
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.147949665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92ccc140-1330-4a72-8175-90cb926e9ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.148580254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710800264148551171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92ccc140-1330-4a72-8175-90cb926e9ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.149457583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9594ca6a-4306-480f-b4a8-05c8b375acad name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.149722272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9594ca6a-4306-480f-b4a8-05c8b375acad name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 22:17:44 old-k8s-version-648232 crio[655]: time="2024-03-18 22:17:44.149837398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9594ca6a-4306-480f-b4a8-05c8b375acad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 21:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055911] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044955] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.805580] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387906] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.740121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.008894] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.062333] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062747] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.184160] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.169340] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.291707] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +7.214394] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.068357] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.838802] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Mar18 21:59] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 22:03] systemd-fstab-generator[4983]: Ignoring "noauto" option for root device
	[Mar18 22:05] systemd-fstab-generator[5260]: Ignoring "noauto" option for root device
	[  +0.070654] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:17:44 up 19 min,  0 users,  load average: 0.01, 0.08, 0.08
	Linux old-k8s-version-648232 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]: net.(*sysDialer).dialSerial(0xc000be6000, 0x4f7fe40, 0xc000da7440, 0xc0005f4680, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]:         /usr/local/go/src/net/dial.go:548 +0x152
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]: net.(*Dialer).DialContext(0xc000c1c480, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c69ce0, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c28d00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c69ce0, 0x24, 0x1000000000060, 0x7f2f21abf698, 0x118, ...)
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]: net/http.(*Transport).dial(0xc0001c7540, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c69ce0, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]: net/http.(*Transport).dialConn(0xc0001c7540, 0x4f7fe00, 0xc000120018, 0x0, 0xc000101620, 0x5, 0xc000c69ce0, 0x24, 0x0, 0xc0003d0240, ...)
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]: net/http.(*Transport).dialConnFor(0xc0001c7540, 0xc000b9c160)
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]: created by net/http.(*Transport).queueForDial
	Mar 18 22:17:41 old-k8s-version-648232 kubelet[6703]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 18 22:17:41 old-k8s-version-648232 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 22:17:41 old-k8s-version-648232 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 18 22:17:42 old-k8s-version-648232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 131.
	Mar 18 22:17:42 old-k8s-version-648232 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 18 22:17:42 old-k8s-version-648232 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 18 22:17:42 old-k8s-version-648232 kubelet[6712]: I0318 22:17:42.235652    6712 server.go:416] Version: v1.20.0
	Mar 18 22:17:42 old-k8s-version-648232 kubelet[6712]: I0318 22:17:42.236345    6712 server.go:837] Client rotation is on, will bootstrap in background
	Mar 18 22:17:42 old-k8s-version-648232 kubelet[6712]: I0318 22:17:42.238590    6712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 18 22:17:42 old-k8s-version-648232 kubelet[6712]: I0318 22:17:42.240143    6712 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 18 22:17:42 old-k8s-version-648232 kubelet[6712]: W0318 22:17:42.240330    6712 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 2 (246.130864ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-648232" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (101.48s)
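For local debugging of this failure, the commands below follow the suggestions captured in the log above (inspect the kubelet, list kube containers via CRI-O, and retry the start with the systemd cgroup driver). This is a hedged sketch, not part of the captured test output: the profile name, Kubernetes version, and flags are copied from this run and may need adjusting for another environment.

# Inspect the kubelet on the affected node (profile name taken from this run)
out/minikube-linux-amd64 ssh -p old-k8s-version-648232 "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 100"
# List Kubernetes containers through CRI-O, as the kubeadm output above suggests
out/minikube-linux-amd64 ssh -p old-k8s-version-648232 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
# Retry the start with the cgroup driver suggested in the log
out/minikube-linux-amd64 start -p old-k8s-version-648232 --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 --extra-config=kubelet.cgroup-driver=systemd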

                                                
                                    

Test pass (257/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 52.54
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.28.4/json-events 47.86
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 51.93
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.13
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.56
31 TestOffline 123.56
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 216.68
38 TestAddons/parallel/Registry 22.77
40 TestAddons/parallel/InspektorGadget 11.32
41 TestAddons/parallel/MetricsServer 6.3
42 TestAddons/parallel/HelmTiller 14.48
44 TestAddons/parallel/CSI 48.41
45 TestAddons/parallel/Headlamp 17.41
46 TestAddons/parallel/CloudSpanner 6.7
48 TestAddons/parallel/NvidiaDevicePlugin 6.03
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 60.58
55 TestCertExpiration 417.47
57 TestForceSystemdFlag 114.4
58 TestForceSystemdEnv 47.23
60 TestKVMDriverInstallOrUpdate 4.54
64 TestErrorSpam/setup 44.78
65 TestErrorSpam/start 0.35
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.6
68 TestErrorSpam/unpause 1.72
69 TestErrorSpam/stop 6.33
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 97.68
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 42.38
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.23
81 TestFunctional/serial/CacheCmd/cache/add_local 2.28
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
83 TestFunctional/serial/CacheCmd/cache/list 0.05
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 32.47
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.46
92 TestFunctional/serial/LogsFileCmd 1.54
93 TestFunctional/serial/InvalidService 4.77
95 TestFunctional/parallel/ConfigCmd 0.43
96 TestFunctional/parallel/DashboardCmd 21.99
97 TestFunctional/parallel/DryRun 0.29
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 1.15
103 TestFunctional/parallel/ServiceCmdConnect 11.65
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 41.67
107 TestFunctional/parallel/SSHCmd 0.47
108 TestFunctional/parallel/CpCmd 1.61
109 TestFunctional/parallel/MySQL 26.85
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 1.69
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
119 TestFunctional/parallel/License 0.64
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
122 TestFunctional/parallel/ProfileCmd/profile_list 0.33
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
124 TestFunctional/parallel/MountCmd/any-port 10.74
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.53
130 TestFunctional/parallel/ImageCommands/Setup 2.03
131 TestFunctional/parallel/ServiceCmd/List 0.87
132 TestFunctional/parallel/MountCmd/specific-port 1.57
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.84
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.43
136 TestFunctional/parallel/ServiceCmd/Format 0.42
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
138 TestFunctional/parallel/ServiceCmd/URL 0.42
139 TestFunctional/parallel/Version/short 0.06
140 TestFunctional/parallel/Version/components 1
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.14
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 12.22
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.84
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.43
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.34
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 300.64
166 TestMultiControlPlane/serial/DeployApp 6.89
167 TestMultiControlPlane/serial/PingHostFromPods 1.38
168 TestMultiControlPlane/serial/AddWorkerNode 47.44
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
171 TestMultiControlPlane/serial/CopyFile 13.44
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.38
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
180 TestMultiControlPlane/serial/RestartCluster 264.58
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
182 TestMultiControlPlane/serial/AddSecondaryNode 89.42
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
187 TestJSONOutput/start/Command 89.29
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.74
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.68
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.47
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.21
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 92.16
219 TestMountStart/serial/StartWithMountFirst 27.87
220 TestMountStart/serial/VerifyMountFirst 0.38
221 TestMountStart/serial/StartWithMountSecond 29.57
222 TestMountStart/serial/VerifyMountSecond 0.38
223 TestMountStart/serial/DeleteFirst 0.68
224 TestMountStart/serial/VerifyMountPostDelete 0.38
225 TestMountStart/serial/Stop 1.41
226 TestMountStart/serial/RestartStopped 23.09
227 TestMountStart/serial/VerifyMountPostStop 0.38
230 TestMultiNode/serial/FreshStart2Nodes 107.9
231 TestMultiNode/serial/DeployApp2Nodes 6.58
232 TestMultiNode/serial/PingHostFrom2Pods 0.87
233 TestMultiNode/serial/AddNode 40.65
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.49
237 TestMultiNode/serial/StopNode 3.16
238 TestMultiNode/serial/StartAfterStop 29.87
240 TestMultiNode/serial/DeleteNode 2.19
242 TestMultiNode/serial/RestartMultiNode 173.54
243 TestMultiNode/serial/ValidateNameConflict 46.85
250 TestScheduledStopUnix 117.08
254 TestRunningBinaryUpgrade 200.24
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 98.99
268 TestNetworkPlugins/group/false 3.73
272 TestNoKubernetes/serial/StartWithStopK8s 40.41
273 TestNoKubernetes/serial/Start 49.6
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
275 TestNoKubernetes/serial/ProfileList 0.86
276 TestNoKubernetes/serial/Stop 1.41
277 TestNoKubernetes/serial/StartNoArgs 69.97
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
279 TestStoppedBinaryUpgrade/Setup 2.51
280 TestStoppedBinaryUpgrade/Upgrade 99.93
289 TestPause/serial/Start 119.95
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
291 TestNetworkPlugins/group/auto/Start 71.7
292 TestNetworkPlugins/group/auto/KubeletFlags 0.24
293 TestNetworkPlugins/group/auto/NetCatPod 12.22
294 TestPause/serial/SecondStartNoReconfiguration 40.76
295 TestNetworkPlugins/group/auto/DNS 25.97
296 TestNetworkPlugins/group/kindnet/Start 65.17
297 TestNetworkPlugins/group/auto/Localhost 0.17
298 TestNetworkPlugins/group/auto/HairPin 0.13
299 TestPause/serial/Pause 1.09
300 TestPause/serial/VerifyStatus 0.36
301 TestPause/serial/Unpause 0.94
302 TestPause/serial/PauseAgain 1.23
303 TestPause/serial/DeletePaused 1.58
304 TestNetworkPlugins/group/calico/Start 108.83
305 TestPause/serial/VerifyDeletedResources 3.35
306 TestNetworkPlugins/group/custom-flannel/Start 125.34
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
309 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
310 TestNetworkPlugins/group/kindnet/DNS 0.2
311 TestNetworkPlugins/group/kindnet/Localhost 0.19
312 TestNetworkPlugins/group/kindnet/HairPin 0.17
313 TestNetworkPlugins/group/enable-default-cni/Start 115.36
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/flannel/Start 86.99
316 TestNetworkPlugins/group/calico/KubeletFlags 0.35
317 TestNetworkPlugins/group/calico/NetCatPod 13.8
318 TestNetworkPlugins/group/calico/DNS 0.17
319 TestNetworkPlugins/group/calico/Localhost 0.17
320 TestNetworkPlugins/group/calico/HairPin 0.15
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
323 TestNetworkPlugins/group/custom-flannel/DNS 0.25
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
326 TestNetworkPlugins/group/bridge/Start 102.57
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
334 TestNetworkPlugins/group/flannel/ControllerPod 6.01
335 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
336 TestNetworkPlugins/group/flannel/NetCatPod 13.86
338 TestStartStop/group/no-preload/serial/FirstStart 194.39
339 TestNetworkPlugins/group/flannel/DNS 0.21
340 TestNetworkPlugins/group/flannel/Localhost 0.17
341 TestNetworkPlugins/group/flannel/HairPin 0.16
343 TestStartStop/group/embed-certs/serial/FirstStart 110.52
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
345 TestNetworkPlugins/group/bridge/NetCatPod 11.25
346 TestNetworkPlugins/group/bridge/DNS 0.17
347 TestNetworkPlugins/group/bridge/Localhost 0.15
348 TestNetworkPlugins/group/bridge/HairPin 0.15
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.66
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.33
352 TestStartStop/group/embed-certs/serial/DeployApp 9.35
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
357 TestStartStop/group/no-preload/serial/DeployApp 11.29
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 683.56
365 TestStartStop/group/embed-certs/serial/SecondStart 625.56
367 TestStartStop/group/old-k8s-version/serial/Stop 4.47
368 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
370 TestStartStop/group/no-preload/serial/SecondStart 567.37
380 TestStartStop/group/newest-cni/serial/FirstStart 58.73
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.6
383 TestStartStop/group/newest-cni/serial/Stop 11.67
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
385 TestStartStop/group/newest-cni/serial/SecondStart 40.47
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/newest-cni/serial/Pause 2.53
x
+
TestDownloadOnly/v1.20.0/json-events (52.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-029911 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-029911 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (52.540673415s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (52.54s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-029911
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-029911: exit status 85 (66.825562ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-029911 | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC |          |
	|         | -p download-only-029911        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 20:29:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 20:29:03.191095   12580 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:29:03.191204   12580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:29:03.191213   12580 out.go:304] Setting ErrFile to fd 2...
	I0318 20:29:03.191217   12580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:29:03.191383   12580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	W0318 20:29:03.191494   12580 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18421-5321/.minikube/config/config.json: open /home/jenkins/minikube-integration/18421-5321/.minikube/config/config.json: no such file or directory
	I0318 20:29:03.192034   12580 out.go:298] Setting JSON to true
	I0318 20:29:03.192916   12580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":687,"bootTime":1710793056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:29:03.192973   12580 start.go:139] virtualization: kvm guest
	I0318 20:29:03.195240   12580 out.go:97] [download-only-029911] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:29:03.196613   12580 out.go:169] MINIKUBE_LOCATION=18421
	W0318 20:29:03.195323   12580 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 20:29:03.195372   12580 notify.go:220] Checking for updates...
	I0318 20:29:03.199014   12580 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:29:03.200146   12580 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:29:03.201217   12580 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:29:03.202379   12580 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 20:29:03.204450   12580 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 20:29:03.204657   12580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:29:03.297636   12580 out.go:97] Using the kvm2 driver based on user configuration
	I0318 20:29:03.297667   12580 start.go:297] selected driver: kvm2
	I0318 20:29:03.297673   12580 start.go:901] validating driver "kvm2" against <nil>
	I0318 20:29:03.298093   12580 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:29:03.298234   12580 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 20:29:03.311877   12580 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 20:29:03.311923   12580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 20:29:03.312404   12580 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 20:29:03.312549   12580 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 20:29:03.312605   12580 cni.go:84] Creating CNI manager for ""
	I0318 20:29:03.312619   12580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 20:29:03.312628   12580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 20:29:03.312670   12580 start.go:340] cluster config:
	{Name:download-only-029911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-029911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:29:03.312824   12580 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:29:03.314538   12580 out.go:97] Downloading VM boot image ...
	I0318 20:29:03.314569   12580 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0318 20:29:12.741899   12580 out.go:97] Starting "download-only-029911" primary control-plane node in "download-only-029911" cluster
	I0318 20:29:12.741925   12580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 20:29:12.863713   12580 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 20:29:12.863744   12580 cache.go:56] Caching tarball of preloaded images
	I0318 20:29:12.863935   12580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 20:29:12.865741   12580 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 20:29:12.865761   12580 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:29:12.973400   12580 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 20:29:26.704101   12580 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:29:26.704193   12580 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:29:27.602823   12580 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 20:29:27.603163   12580 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/download-only-029911/config.json ...
	I0318 20:29:27.603191   12580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/download-only-029911/config.json: {Name:mkca9bd0f6b515516d9a1e8be6c3e84195de6a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:29:27.603342   12580 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 20:29:27.603501   12580 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-029911 host does not exist
	  To start a cluster, run: "minikube start -p download-only-029911"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
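The download-only log above fetches the preload tarball with an md5 checksum in the URL query and then reports getting, saving, and verifying that checksum. Below is a minimal Go sketch of that download-and-verify pattern; it is illustrative only, not minikube's actual preload.go code, and the destination path is a placeholder.

// Minimal sketch of the download-and-verify pattern the preload log above
// reports ("getting checksum", "verifying checksum"). Illustrative only;
// the destination path is a placeholder, the URL and md5 come from the log.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadAndVerify(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"/tmp/preload.tar.lz4", // placeholder destination
		"f93b07cde9c3289306cbaeb7a1803c19",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload verified")
}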

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-029911
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (47.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-820089 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-820089 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (47.85706644s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (47.86s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-820089
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-820089: exit status 85 (68.058959ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-029911 | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC |                     |
	|         | -p download-only-029911        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC | 18 Mar 24 20:29 UTC |
	| delete  | -p download-only-029911        | download-only-029911 | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC | 18 Mar 24 20:29 UTC |
	| start   | -o=json --download-only        | download-only-820089 | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC |                     |
	|         | -p download-only-820089        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 20:29:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 20:29:56.062387   12847 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:29:56.062516   12847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:29:56.062528   12847 out.go:304] Setting ErrFile to fd 2...
	I0318 20:29:56.062533   12847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:29:56.062742   12847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:29:56.063314   12847 out.go:298] Setting JSON to true
	I0318 20:29:56.064260   12847 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":740,"bootTime":1710793056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:29:56.064326   12847 start.go:139] virtualization: kvm guest
	I0318 20:29:56.066332   12847 out.go:97] [download-only-820089] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:29:56.067820   12847 out.go:169] MINIKUBE_LOCATION=18421
	I0318 20:29:56.066516   12847 notify.go:220] Checking for updates...
	I0318 20:29:56.070411   12847 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:29:56.071695   12847 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:29:56.072991   12847 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:29:56.074214   12847 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 20:29:56.076470   12847 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 20:29:56.076695   12847 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:29:56.106181   12847 out.go:97] Using the kvm2 driver based on user configuration
	I0318 20:29:56.106199   12847 start.go:297] selected driver: kvm2
	I0318 20:29:56.106204   12847 start.go:901] validating driver "kvm2" against <nil>
	I0318 20:29:56.106513   12847 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:29:56.106588   12847 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 20:29:56.120106   12847 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 20:29:56.120146   12847 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 20:29:56.120606   12847 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 20:29:56.120759   12847 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 20:29:56.120845   12847 cni.go:84] Creating CNI manager for ""
	I0318 20:29:56.120858   12847 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 20:29:56.120865   12847 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 20:29:56.120939   12847 start.go:340] cluster config:
	{Name:download-only-820089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-820089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:29:56.121037   12847 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:29:56.122546   12847 out.go:97] Starting "download-only-820089" primary control-plane node in "download-only-820089" cluster
	I0318 20:29:56.122561   12847 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:29:56.227821   12847 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 20:29:56.227852   12847 cache.go:56] Caching tarball of preloaded images
	I0318 20:29:56.228043   12847 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:29:56.229843   12847 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 20:29:56.229865   12847 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:29:56.335756   12847 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 20:30:09.146234   12847 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:30:09.146327   12847 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:30:10.134038   12847 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 20:30:10.134357   12847 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/download-only-820089/config.json ...
	I0318 20:30:10.134383   12847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/download-only-820089/config.json: {Name:mk8ce320a19f08189037b34458680327f63ce86c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:30:10.134520   12847 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 20:30:10.134650   12847 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-820089 host does not exist
	  To start a cluster, run: "minikube start -p download-only-820089"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-820089
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (51.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-022655 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-022655 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (51.929431063s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (51.93s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-022655
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-022655: exit status 85 (69.677198ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-029911 | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC |                     |
	|         | -p download-only-029911           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC | 18 Mar 24 20:29 UTC |
	| delete  | -p download-only-029911           | download-only-029911 | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC | 18 Mar 24 20:29 UTC |
	| start   | -o=json --download-only           | download-only-820089 | jenkins | v1.32.0 | 18 Mar 24 20:29 UTC |                     |
	|         | -p download-only-820089           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 20:30 UTC | 18 Mar 24 20:30 UTC |
	| delete  | -p download-only-820089           | download-only-820089 | jenkins | v1.32.0 | 18 Mar 24 20:30 UTC | 18 Mar 24 20:30 UTC |
	| start   | -o=json --download-only           | download-only-022655 | jenkins | v1.32.0 | 18 Mar 24 20:30 UTC |                     |
	|         | -p download-only-022655           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 20:30:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 20:30:44.259600   13104 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:30:44.259872   13104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:30:44.259883   13104 out.go:304] Setting ErrFile to fd 2...
	I0318 20:30:44.259887   13104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:30:44.260066   13104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:30:44.260594   13104 out.go:298] Setting JSON to true
	I0318 20:30:44.261445   13104 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":788,"bootTime":1710793056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:30:44.261498   13104 start.go:139] virtualization: kvm guest
	I0318 20:30:44.263791   13104 out.go:97] [download-only-022655] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:30:44.265268   13104 out.go:169] MINIKUBE_LOCATION=18421
	I0318 20:30:44.264012   13104 notify.go:220] Checking for updates...
	I0318 20:30:44.267810   13104 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:30:44.268936   13104 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:30:44.270099   13104 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:30:44.271189   13104 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 20:30:44.273289   13104 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 20:30:44.273503   13104 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:30:44.304796   13104 out.go:97] Using the kvm2 driver based on user configuration
	I0318 20:30:44.304821   13104 start.go:297] selected driver: kvm2
	I0318 20:30:44.304827   13104 start.go:901] validating driver "kvm2" against <nil>
	I0318 20:30:44.305147   13104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:30:44.305227   13104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18421-5321/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 20:30:44.320000   13104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 20:30:44.320048   13104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 20:30:44.320490   13104 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 20:30:44.320617   13104 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 20:30:44.320668   13104 cni.go:84] Creating CNI manager for ""
	I0318 20:30:44.320681   13104 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 20:30:44.320687   13104 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 20:30:44.320738   13104 start.go:340] cluster config:
	{Name:download-only-022655 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-022655 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:30:44.320817   13104 iso.go:125] acquiring lock: {Name:mkee7ff8b19df92fc222c1062e4ab65f944da05d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 20:30:44.322405   13104 out.go:97] Starting "download-only-022655" primary control-plane node in "download-only-022655" cluster
	I0318 20:30:44.322419   13104 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 20:30:44.428835   13104 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 20:30:44.428862   13104 cache.go:56] Caching tarball of preloaded images
	I0318 20:30:44.429121   13104 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 20:30:44.431005   13104 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 20:30:44.431024   13104 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:30:44.540113   13104 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 20:30:59.908092   13104 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:30:59.908175   13104 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18421-5321/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0318 20:31:00.669319   13104 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 20:31:00.669628   13104 profile.go:142] Saving config to /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/download-only-022655/config.json ...
	I0318 20:31:00.669656   13104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/download-only-022655/config.json: {Name:mke32dd012cc1ddc5e79b742c5b780d532adb6be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 20:31:00.669819   13104 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 20:31:00.669958   13104 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18421-5321/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-022655 host does not exist
	  To start a cluster, run: "minikube start -p download-only-022655"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-022655
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-766908 --alsologtostderr --binary-mirror http://127.0.0.1:36299 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-766908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-766908
--- PASS: TestBinaryMirror (0.56s)
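TestBinaryMirror passes --binary-mirror http://127.0.0.1:36299, pointing minikube at a local HTTP endpoint for Kubernetes binary downloads. As a rough sketch only (an assumption about how such a mirror could be served, not the server this test actually spins up; the directory name is a placeholder):

// Minimal static file server one might point a flag like --binary-mirror at.
// The address matches the one in the command above; "./mirror" is a placeholder.
package main

import (
	"log"
	"net/http"
)

func main() {
	log.Fatal(http.ListenAndServe("127.0.0.1:36299", http.FileServer(http.Dir("./mirror"))))
}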

                                                
                                    
TestOffline (123.56s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-753209 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-753209 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m2.512624635s)
helpers_test.go:175: Cleaning up "offline-crio-753209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-753209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-753209: (1.051462909s)
--- PASS: TestOffline (123.56s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-791443
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-791443: exit status 85 (58.898177ms)

                                                
                                                
-- stdout --
	* Profile "addons-791443" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-791443"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-791443
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-791443: exit status 85 (57.257699ms)

                                                
                                                
-- stdout --
	* Profile "addons-791443" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-791443"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (216.68s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-791443 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-791443 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m36.681205781s)
--- PASS: TestAddons/Setup (216.68s)

                                                
                                    
TestAddons/parallel/Registry (22.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 30.069362ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-m9jd7" [b402e103-9225-45b0-811b-bc35d410e2a6] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005017671s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-br298" [17ae91fe-5e22-4c2b-8b5a-9bfb300a1126] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008602904s
addons_test.go:340: (dbg) Run:  kubectl --context addons-791443 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-791443 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-791443 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.403646602s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 ip
2024/03/18 20:35:35 [DEBUG] GET http://192.168.39.131:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-791443 addons disable registry --alsologtostderr -v=1: (1.163081996s)
--- PASS: TestAddons/parallel/Registry (22.77s)
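The registry addon check above ultimately reduces to an HTTP probe of registry.kube-system.svc.cluster.local from a pod (the busybox wget --spider call). A rough Go equivalent of that probe, usable only from inside the cluster where that service name resolves (the timeout is an arbitrary choice, not taken from the test):

// In-cluster probe of the registry service, analogous to the wget --spider
// call above. Must run from a pod with cluster DNS; 5s timeout is arbitrary.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry not reachable:", err)
		os.Exit(1)
	}
	resp.Body.Close()
	fmt.Println("registry responded with status:", resp.Status)
}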

                                                
                                    
TestAddons/parallel/InspektorGadget (11.32s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ztddn" [cbdc3a25-9395-4b22-a2da-e574f20aed7d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011763177s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-791443
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-791443: (6.303766178s)
--- PASS: TestAddons/parallel/InspektorGadget (11.32s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.461029ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-4nzjv" [b7ecdb56-4ae5-4112-9a8b-40564207c8ff] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008766709s
addons_test.go:415: (dbg) Run:  kubectl --context addons-791443 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-791443 addons disable metrics-server --alsologtostderr -v=1: (1.224688965s)
--- PASS: TestAddons/parallel/MetricsServer (6.30s)

                                                
                                    
TestAddons/parallel/HelmTiller (14.48s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.622816ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-cksth" [5a800c36-110f-45ae-aabb-2a2089254b00] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005865711s
addons_test.go:473: (dbg) Run:  kubectl --context addons-791443 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-791443 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.044184246s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-linux-amd64 -p addons-791443 addons disable helm-tiller --alsologtostderr -v=1: (1.420768204s)
--- PASS: TestAddons/parallel/HelmTiller (14.48s)

                                                
                                    
TestAddons/parallel/CSI (48.41s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 30.934986ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-791443 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-791443 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b79e0872-7bdf-4f7f-bd01-30876cbaf1a9] Pending
helpers_test.go:344: "task-pv-pod" [b79e0872-7bdf-4f7f-bd01-30876cbaf1a9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b79e0872-7bdf-4f7f-bd01-30876cbaf1a9] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004152043s
addons_test.go:584: (dbg) Run:  kubectl --context addons-791443 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-791443 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-791443 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-791443 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-791443 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-791443 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-791443 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [32b926f2-2feb-444d-be39-6a6de49e7229] Pending
helpers_test.go:344: "task-pv-pod-restore" [32b926f2-2feb-444d-be39-6a6de49e7229] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [32b926f2-2feb-444d-be39-6a6de49e7229] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.003903604s
addons_test.go:626: (dbg) Run:  kubectl --context addons-791443 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-791443 delete pod task-pv-pod-restore: (1.374820552s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-791443 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-791443 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-791443 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.856354634s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-791443 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.41s)

                                                
                                    
TestAddons/parallel/Headlamp (17.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-791443 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-791443 --alsologtostderr -v=1: (1.402944022s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-dv5fw" [1de39436-0b96-4eb9-86be-ee2280b59105] Pending
helpers_test.go:344: "headlamp-5485c556b-dv5fw" [1de39436-0b96-4eb9-86be-ee2280b59105] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-dv5fw" [1de39436-0b96-4eb9-86be-ee2280b59105] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004631124s
--- PASS: TestAddons/parallel/Headlamp (17.41s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-l2ghv" [b16a9b15-50c8-4b92-8dd6-07097cdc3e8e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005543294s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-791443
--- PASS: TestAddons/parallel/CloudSpanner (6.70s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.03s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-n5nbn" [45861794-8d7c-49b5-8748-20d3e179d433] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007447048s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-791443
addons_test.go:955: (dbg) Done: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-791443: (1.018325024s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.03s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-nvjkr" [8595c6c7-7fa6-468a-bb41-23e8c560faa3] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004886471s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-791443 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-791443 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (60.58s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-339325 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-339325 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.100595639s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-339325 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-339325 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-339325 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-339325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-339325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-339325: (1.023401195s)
--- PASS: TestCertOptions (60.58s)
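TestCertOptions asserts, via the openssl call over ssh above, that the generated apiserver certificate carries the extra SANs passed on the command line. Below is a small illustrative Go check of the same property; the certificate path is a placeholder, and the expected values simply mirror the --apiserver-ips/--apiserver-names flags shown, so this is a sketch rather than the test's own logic:

// Illustrative SAN inspection along the lines of the openssl check above.
// "apiserver.crt" is a placeholder path; expected SANs mirror the flags shown.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM certificate found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Look for the IP and DNS SANs requested with --apiserver-ips / --apiserver-names.
	hasIP := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(net.ParseIP("192.168.15.15")) {
			hasIP = true
		}
	}
	hasName := false
	for _, name := range cert.DNSNames {
		if name == "www.google.com" {
			hasName = true
		}
	}
	fmt.Printf("192.168.15.15 in IP SANs: %v, www.google.com in DNS SANs: %v\n", hasIP, hasName)
}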

                                                
                                    
TestCertExpiration (417.47s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-443643 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-443643 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m41.658769393s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-443643 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-443643 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (2m15.008508443s)
helpers_test.go:175: Cleaning up "cert-expiration-443643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-443643
--- PASS: TestCertExpiration (417.47s)

                                                
                                    
x
+
TestForceSystemdFlag (114.40s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-803767 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-803767 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m53.158294717s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-803767 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-803767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-803767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-803767: (1.020165913s)
--- PASS: TestForceSystemdFlag (114.40s)
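The flag test's only assertion is the cat of the CRI-O drop-in config above. A minimal sketch of checking that file for the systemd cgroup manager from the host, assuming the exact key is cgroup_manager = "systemd" (that key name is an assumption about CRI-O configuration, not quoted from this report):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Profile name and binary path reuse the values shown in this report.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-803767",
			"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is configured for the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not found in the drop-in config")
		}
	}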

                                                
                                    
x
+
TestForceSystemdEnv (47.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-800500 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-800500 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.223318906s)
helpers_test.go:175: Cleaning up "force-systemd-env-800500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-800500
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-800500: (1.011085516s)
--- PASS: TestForceSystemdEnv (47.23s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.54s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.54s)

                                                
                                    
x
+
TestErrorSpam/setup (44.78s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-172534 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-172534 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-172534 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-172534 --driver=kvm2  --container-runtime=crio: (44.777048981s)
--- PASS: TestErrorSpam/setup (44.78s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
x
+
TestErrorSpam/pause (1.60s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
x
+
TestErrorSpam/stop (6.33s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 stop: (2.313981823s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 stop: (1.985120888s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-172534 --log_dir /tmp/nospam-172534 stop: (2.031117569s)
--- PASS: TestErrorSpam/stop (6.33s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18421-5321/.minikube/files/etc/test/nested/copy/12568/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (97.68s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882018 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-882018 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.675604587s)
--- PASS: TestFunctional/serial/StartWithProxy (97.68s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (42.38s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882018 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-882018 --alsologtostderr -v=8: (42.377432397s)
functional_test.go:659: soft start took 42.378079801s for "functional-882018" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.38s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-882018 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 cache add registry.k8s.io/pause:3.1: (1.02592609s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 cache add registry.k8s.io/pause:3.3: (1.16906055s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 cache add registry.k8s.io/pause:latest: (1.038090952s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-882018 /tmp/TestFunctionalserialCacheCmdcacheadd_local223140852/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cache add minikube-local-cache-test:functional-882018
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 cache add minikube-local-cache-test:functional-882018: (1.938000179s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cache delete minikube-local-cache-test:functional-882018
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-882018
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (222.134377ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
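The reload sequence above is: remove the image on the node, confirm crictl inspecti now fails, run cache reload, then confirm inspecti succeeds again. A minimal sketch that drives the same four steps from the host (the run helper is illustrative, not part of the suite):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run reports whether the given command exited successfully.
	func run(args ...string) bool {
		return exec.Command(args[0], args[1:]...).Run() == nil
	}
	
	func main() {
		mk, p := "out/minikube-linux-amd64", "functional-882018" // values from this report
		run(mk, "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		fmt.Println("present after rmi?   ", run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
		run(mk, "-p", p, "cache", "reload")
		fmt.Println("present after reload?", run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
	}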

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 kubectl -- --context functional-882018 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-882018 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.47s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0318 20:45:14.157958   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:45:14.163575   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:45:14.173778   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:45:14.194014   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:45:14.234256   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:45:14.314557   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:45:14.474937   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:45:14.795511   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-882018 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.469765582s)
functional_test.go:757: restart took 32.469854533s for "functional-882018" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.47s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-882018 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
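The health check above boils down to: list the control-plane pods as JSON and require phase Running plus a Ready condition on each. A minimal sketch of the same query keeping only the fields that matter (the struct is an illustrative subset of the Pod schema, not the test's own types):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}
	
	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-882018",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "NotReady"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					ready = "Ready"
				}
			}
			fmt.Printf("%s: phase=%s status=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}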

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 logs
E0318 20:45:15.436245   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 logs: (1.459141937s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 logs --file /tmp/TestFunctionalserialLogsFileCmd2031488604/001/logs.txt
E0318 20:45:16.716346   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 logs --file /tmp/TestFunctionalserialLogsFileCmd2031488604/001/logs.txt: (1.539241269s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.77s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-882018 apply -f testdata/invalidsvc.yaml
E0318 20:45:19.277033   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-882018
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-882018: exit status 115 (279.039665ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.130:31356 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-882018 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-882018 delete -f testdata/invalidsvc.yaml: (1.283174892s)
--- PASS: TestFunctional/serial/InvalidService (4.77s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 config get cpus: exit status 14 (79.66018ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 config get cpus: exit status 14 (52.452536ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
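As the output shows, config get on a key that has never been set (or was just unset) fails with exit status 14 rather than printing an empty value. A minimal sketch that treats that exit code as "not set" instead of a hard error:

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-882018", "config", "get", "cpus")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("cpus =", strings.TrimSpace(string(out)))
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
			// Exit code 14 is what this report shows for an unset key.
			fmt.Println("cpus is not set")
		default:
			panic(err)
		}
	}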

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (21.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882018 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-882018 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 19717: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.99s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882018 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882018 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.144689ms)

                                                
                                                
-- stdout --
	* [functional-882018] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:45:26.060755   19618 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:45:26.061313   19618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:45:26.061336   19618 out.go:304] Setting ErrFile to fd 2...
	I0318 20:45:26.061343   19618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:45:26.061801   19618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:45:26.062651   19618 out.go:298] Setting JSON to false
	I0318 20:45:26.063946   19618 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1670,"bootTime":1710793056,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:45:26.064024   19618 start.go:139] virtualization: kvm guest
	I0318 20:45:26.065995   19618 out.go:177] * [functional-882018] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 20:45:26.067849   19618 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 20:45:26.069364   19618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:45:26.067878   19618 notify.go:220] Checking for updates...
	I0318 20:45:26.070743   19618 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:45:26.072028   19618 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:45:26.073302   19618 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 20:45:26.074524   19618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 20:45:26.076096   19618 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:45:26.076486   19618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:45:26.076530   19618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:45:26.093783   19618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0318 20:45:26.094302   19618 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:45:26.094932   19618 main.go:141] libmachine: Using API Version  1
	I0318 20:45:26.094963   19618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:45:26.095395   19618 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:45:26.095585   19618 main.go:141] libmachine: (functional-882018) Calling .DriverName
	I0318 20:45:26.095856   19618 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:45:26.096239   19618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:45:26.096278   19618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:45:26.111949   19618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
	I0318 20:45:26.112310   19618 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:45:26.112799   19618 main.go:141] libmachine: Using API Version  1
	I0318 20:45:26.112827   19618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:45:26.113199   19618 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:45:26.113388   19618 main.go:141] libmachine: (functional-882018) Calling .DriverName
	I0318 20:45:26.146148   19618 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 20:45:26.147322   19618 start.go:297] selected driver: kvm2
	I0318 20:45:26.147335   19618 start.go:901] validating driver "kvm2" against &{Name:functional-882018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-882018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.130 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:45:26.147444   19618 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 20:45:26.149590   19618 out.go:177] 
	W0318 20:45:26.150878   19618 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 20:45:26.152119   19618 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882018 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
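The dry run fails as intended: 250MB is below the 1800MB floor, so minikube exits with status 23 and RSRC_INSUFFICIENT_REQ_MEMORY before touching the existing profile. A minimal sketch of that kind of floor check, with the 1800MB threshold taken from the message above and everything else illustrative:

	package main
	
	import "fmt"
	
	// validateMemory sketches the floor check behind RSRC_INSUFFICIENT_REQ_MEMORY.
	func validateMemory(requestedMB int) error {
		const minUsableMB = 1800
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}
	
	func main() {
		fmt.Println(validateMemory(250))  // rejected, as in the dry run above
		fmt.Println(validateMemory(4000)) // accepted
	}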

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-882018 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-882018 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.443653ms)

                                                
                                                
-- stdout --
	* [functional-882018] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 20:45:25.914223   19573 out.go:291] Setting OutFile to fd 1 ...
	I0318 20:45:25.914313   19573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:45:25.914318   19573 out.go:304] Setting ErrFile to fd 2...
	I0318 20:45:25.914325   19573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 20:45:25.914602   19573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 20:45:25.915074   19573 out.go:298] Setting JSON to false
	I0318 20:45:25.915953   19573 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1670,"bootTime":1710793056,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 20:45:25.916011   19573 start.go:139] virtualization: kvm guest
	I0318 20:45:25.918417   19573 out.go:177] * [functional-882018] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0318 20:45:25.919829   19573 notify.go:220] Checking for updates...
	I0318 20:45:25.921419   19573 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 20:45:25.922849   19573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 20:45:25.924182   19573 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 20:45:25.925706   19573 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 20:45:25.927051   19573 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 20:45:25.928438   19573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 20:45:25.930364   19573 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 20:45:25.930913   19573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:45:25.930967   19573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:45:25.945316   19573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I0318 20:45:25.945740   19573 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:45:25.946214   19573 main.go:141] libmachine: Using API Version  1
	I0318 20:45:25.946238   19573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:45:25.946557   19573 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:45:25.946732   19573 main.go:141] libmachine: (functional-882018) Calling .DriverName
	I0318 20:45:25.946990   19573 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 20:45:25.947276   19573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 20:45:25.947323   19573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 20:45:25.961782   19573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35405
	I0318 20:45:25.962212   19573 main.go:141] libmachine: () Calling .GetVersion
	I0318 20:45:25.962697   19573 main.go:141] libmachine: Using API Version  1
	I0318 20:45:25.962720   19573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 20:45:25.963037   19573 main.go:141] libmachine: () Calling .GetMachineName
	I0318 20:45:25.963160   19573 main.go:141] libmachine: (functional-882018) Calling .DriverName
	I0318 20:45:25.994006   19573 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0318 20:45:25.995436   19573 start.go:297] selected driver: kvm2
	I0318 20:45:25.995447   19573 start.go:901] validating driver "kvm2" against &{Name:functional-882018 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-882018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.130 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 20:45:25.995562   19573 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 20:45:25.997664   19573 out.go:177] 
	W0318 20:45:25.998926   19573 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 20:45:26.000227   19573 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
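The -f flag above takes a Go text/template rendered against minikube's status fields (the "kublet" in the template is just literal label text; the field referenced is .Kubelet). A minimal sketch of the same idea against an illustrative stand-in struct, not minikube's actual status type:

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Status mirrors only the fields the template above references.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}
	
	func main() {
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		if err := tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"}); err != nil {
			panic(err)
		}
	}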

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-882018 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-882018 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-4nckj" [3719dcd5-5d54-4a8a-b4c8-87db738d1d0f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-4nckj" [3719dcd5-5d54-4a8a-b4c8-87db738d1d0f] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.006031887s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.130:31891
functional_test.go:1671: http://192.168.39.130:31891: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-4nckj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.130:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.130:31891
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.65s)
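The connect test resolves the NodePort URL via minikube service --url and then simply GETs it; the echoserver reply above is what comes back. A minimal sketch of the same round trip (names reuse the ones in this report):

	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-882018",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out))
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}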

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (41.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9d34e0a0-604a-46db-9494-0331fbbe580a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005980102s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-882018 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-882018 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-882018 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-882018 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [85323759-ebcb-468d-bac0-13a2260a212d] Pending
helpers_test.go:344: "sp-pod" [85323759-ebcb-468d-bac0-13a2260a212d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [85323759-ebcb-468d-bac0-13a2260a212d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.00423799s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-882018 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-882018 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-882018 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5dd06c57-db04-4dbf-81ee-2eac967e99ca] Pending
helpers_test.go:344: "sp-pod" [5dd06c57-db04-4dbf-81ee-2eac967e99ca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5dd06c57-db04-4dbf-81ee-2eac967e99ca] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007857448s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-882018 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.67s)
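The claim check above is a persistence round trip: bind a PVC, write a file from one pod, delete that pod, start a fresh pod against the same claim, and confirm the file is still there. A minimal sketch of the same sequence (the kubectl helper is illustrative, and the waits for the pod to become Ready between steps are omitted for brevity):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// kubectl runs a command against the functional-882018 context and panics on failure.
	func kubectl(args ...string) string {
		out, err := exec.Command("kubectl",
			append([]string{"--context", "functional-882018"}, args...)...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		return string(out)
	}
	
	func main() {
		kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// If the claim really persisted, the file written by the first pod is still listed here.
		fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	}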

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh -n functional-882018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cp functional-882018:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1070085137/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh -n functional-882018 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh -n functional-882018 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.61s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (26.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-882018 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-nnpfc" [5fe9b74e-ddb2-48c4-b5e4-ad5d2129ce54] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-nnpfc" [5fe9b74e-ddb2-48c4-b5e4-ad5d2129ce54] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.010019718s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-882018 exec mysql-859648c796-nnpfc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-882018 exec mysql-859648c796-nnpfc -- mysql -ppassword -e "show databases;": exit status 1 (348.77798ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-882018 exec mysql-859648c796-nnpfc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-882018 exec mysql-859648c796-nnpfc -- mysql -ppassword -e "show databases;": exit status 1 (215.454107ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-882018 exec mysql-859648c796-nnpfc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-882018 exec mysql-859648c796-nnpfc -- mysql -ppassword -e "show databases;": exit status 1 (354.978093ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-882018 exec mysql-859648c796-nnpfc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.85s)
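Note (added sketch, not part of the test output): the failed probes above are transient (ERROR 1045 while the server is still initialising its users, ERROR 2002 while mysqld is not yet listening on its socket), and the test simply re-runs the same command until it succeeds. A minimal, hypothetical retry loop along those lines, assuming kubectl is on PATH and reusing the pod name from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Pod name copied from the log; a fresh deployment will generate a different one.
	pod := "mysql-859648c796-nnpfc"
	for attempt := 1; attempt <= 30; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-882018", "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return
		}
		// ERROR 1045 / ERROR 2002 show up here while mysqld is still starting.
		fmt.Printf("attempt %d not ready yet: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became ready")
}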

TestFunctional/parallel/FileSync (0.29s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12568/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo cat /etc/test/nested/copy/12568/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.69s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12568.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo cat /etc/ssl/certs/12568.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12568.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo cat /usr/share/ca-certificates/12568.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/125682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo cat /etc/ssl/certs/125682.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/125682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo cat /usr/share/ca-certificates/125682.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0318 20:45:24.397724   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (1.69s)
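Note (added sketch, not part of the test output): the same presence checks can be scripted directly against the VM. The "12568" id matches the test process id visible in the log prefixes above, and "51391683.0" is the openssl subject-hash style name checked alongside it; both are copied from the log, not invented. A minimal, hypothetical checker assuming minikube is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths copied verbatim from the log output above.
	paths := []string{
		"/etc/ssl/certs/12568.pem",
		"/usr/share/ca-certificates/12568.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		cmd := exec.Command("minikube", "-p", "functional-882018", "ssh", "sudo cat "+p)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("%s missing: %v\n%s", p, err, out)
		} else {
			fmt.Printf("%s present (%d bytes)\n", p, len(out))
		}
	}
}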

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-882018 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 ssh "sudo systemctl is-active docker": exit status 1 (281.396846ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 ssh "sudo systemctl is-active containerd": exit status 1 (296.85266ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
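Note (added sketch, not part of the test output): the non-zero exits above are the passing case. On a cri-o cluster, docker and containerd should be inactive, and `systemctl is-active` exits with status 3 for an inactive unit while printing "inactive". A minimal, hypothetical check assuming minikube is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "containerd"} {
		cmd := exec.Command("minikube", "-p", "functional-882018", "ssh", "sudo systemctl is-active "+svc)
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		switch {
		case err != nil && strings.Contains(state, "inactive"):
			// Non-zero exit plus "inactive" on stdout is what the test expects here.
			fmt.Printf("%s is inactive, as expected on a cri-o node\n", svc)
		case err == nil:
			fmt.Printf("%s is unexpectedly active: %s\n", svc, state)
		default:
			fmt.Printf("%s check failed: %v (%s)\n", svc, err, state)
		}
	}
}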

TestFunctional/parallel/License (0.64s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-882018 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-882018 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-6r7ks" [1b8b7ff0-7c9c-4339-be10-ff32deafa063] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-6r7ks" [1b8b7ff0-7c9c-4339-be10-ff32deafa063] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.009675425s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)
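Note (added sketch, not part of the test output): the deploy-and-expose sequence above can be driven the same way from a small helper. This is a hypothetical example assuming kubectl is on PATH and the context name from the log; it waits on the Deployment condition rather than polling pod labels as the test does.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl against the context from the log and echoes its output.
func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-882018"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	// Create the deployment and expose it on a NodePort, mirroring the logged commands.
	if err := run("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8"); err != nil {
		fmt.Println("create failed:", err)
		return
	}
	if err := run("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"); err != nil {
		fmt.Println("expose failed:", err)
		return
	}
	// Block until the deployment reports Available.
	if err := run("wait", "--for=condition=available", "deployment/hello-node", "--timeout=600s"); err != nil {
		fmt.Println("wait failed:", err)
	}
}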

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "273.12556ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "60.219146ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "380.366779ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "65.966737ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (10.74s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdany-port2011586334/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710794724636217060" to /tmp/TestFunctionalparallelMountCmdany-port2011586334/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710794724636217060" to /tmp/TestFunctionalparallelMountCmdany-port2011586334/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710794724636217060" to /tmp/TestFunctionalparallelMountCmdany-port2011586334/001/test-1710794724636217060
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.856788ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 18 20:45 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 18 20:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 18 20:45 test-1710794724636217060
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh cat /mount-9p/test-1710794724636217060
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-882018 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [62a2e875-86e5-4cfa-92e1-93f0e1a4ffd9] Pending
helpers_test.go:344: "busybox-mount" [62a2e875-86e5-4cfa-92e1-93f0e1a4ffd9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [62a2e875-86e5-4cfa-92e1-93f0e1a4ffd9] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [62a2e875-86e5-4cfa-92e1-93f0e1a4ffd9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.007843863s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-882018 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh stat /mount-9p/created-by-pod
E0318 20:45:34.638183   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdany-port2011586334/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.74s)
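Note (added sketch, not part of the test output): the first findmnt above fails because the 9p mount is still coming up, so the test retries before using the mount point. A minimal, hypothetical polling loop assuming minikube is on PATH and a mount already started at /mount-9p:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the 9p mount is visible inside the VM before touching files under it.
	for i := 0; i < 10; i++ {
		cmd := exec.Command("minikube", "-p", "functional-882018", "ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("9p mount is up:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("9p mount never appeared at /mount-9p")
}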

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882018 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-882018
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-882018
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882018 image ls --format short --alsologtostderr:
I0318 20:46:08.762799   21348 out.go:291] Setting OutFile to fd 1 ...
I0318 20:46:08.765044   21348 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:08.765060   21348 out.go:304] Setting ErrFile to fd 2...
I0318 20:46:08.765067   21348 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:08.765382   21348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
I0318 20:46:08.766039   21348 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:08.766164   21348 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:08.766669   21348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:08.766705   21348 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:08.780313   21348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
I0318 20:46:08.780869   21348 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:08.781443   21348 main.go:141] libmachine: Using API Version  1
I0318 20:46:08.781463   21348 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:08.781839   21348 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:08.782044   21348 main.go:141] libmachine: (functional-882018) Calling .GetState
I0318 20:46:08.783891   21348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:08.783919   21348 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:08.796958   21348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
I0318 20:46:08.797503   21348 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:08.798066   21348 main.go:141] libmachine: Using API Version  1
I0318 20:46:08.798081   21348 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:08.798394   21348 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:08.798564   21348 main.go:141] libmachine: (functional-882018) Calling .DriverName
I0318 20:46:08.798747   21348 ssh_runner.go:195] Run: systemctl --version
I0318 20:46:08.798767   21348 main.go:141] libmachine: (functional-882018) Calling .GetSSHHostname
I0318 20:46:08.801580   21348 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:08.802169   21348 main.go:141] libmachine: (functional-882018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:e4:3f", ip: ""} in network mk-functional-882018: {Iface:virbr1 ExpiryTime:2024-03-18 21:42:30 +0000 UTC Type:0 Mac:52:54:00:bb:e4:3f Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:functional-882018 Clientid:01:52:54:00:bb:e4:3f}
I0318 20:46:08.802194   21348 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined IP address 192.168.39.130 and MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:08.802425   21348 main.go:141] libmachine: (functional-882018) Calling .GetSSHPort
I0318 20:46:08.802605   21348 main.go:141] libmachine: (functional-882018) Calling .GetSSHKeyPath
I0318 20:46:08.802747   21348 main.go:141] libmachine: (functional-882018) Calling .GetSSHUsername
I0318 20:46:08.802885   21348 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/functional-882018/id_rsa Username:docker}
I0318 20:46:08.896202   21348 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 20:46:08.975382   21348 main.go:141] libmachine: Making call to close driver server
I0318 20:46:08.975398   21348 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:08.975666   21348 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:08.975680   21348 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 20:46:08.975688   21348 main.go:141] libmachine: Making call to close driver server
I0318 20:46:08.975698   21348 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:08.975925   21348 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:08.975946   21348 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 20:46:08.975968   21348 main.go:141] libmachine: (functional-882018) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882018 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-882018  | a76bfc1ce9b0a | 3.35kB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer  | functional-882018  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882018 image ls --format table --alsologtostderr:
I0318 20:46:09.036470   21427 out.go:291] Setting OutFile to fd 1 ...
I0318 20:46:09.036567   21427 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:09.036579   21427 out.go:304] Setting ErrFile to fd 2...
I0318 20:46:09.036583   21427 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:09.036767   21427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
I0318 20:46:09.037410   21427 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:09.037562   21427 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:09.038137   21427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:09.038183   21427 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:09.051325   21427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
I0318 20:46:09.051746   21427 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:09.052274   21427 main.go:141] libmachine: Using API Version  1
I0318 20:46:09.052293   21427 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:09.052630   21427 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:09.052808   21427 main.go:141] libmachine: (functional-882018) Calling .GetState
I0318 20:46:09.055021   21427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:09.055060   21427 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:09.069633   21427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
I0318 20:46:09.070052   21427 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:09.070476   21427 main.go:141] libmachine: Using API Version  1
I0318 20:46:09.070498   21427 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:09.070869   21427 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:09.071049   21427 main.go:141] libmachine: (functional-882018) Calling .DriverName
I0318 20:46:09.071244   21427 ssh_runner.go:195] Run: systemctl --version
I0318 20:46:09.071270   21427 main.go:141] libmachine: (functional-882018) Calling .GetSSHHostname
I0318 20:46:09.073954   21427 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:09.074337   21427 main.go:141] libmachine: (functional-882018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:e4:3f", ip: ""} in network mk-functional-882018: {Iface:virbr1 ExpiryTime:2024-03-18 21:42:30 +0000 UTC Type:0 Mac:52:54:00:bb:e4:3f Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:functional-882018 Clientid:01:52:54:00:bb:e4:3f}
I0318 20:46:09.074375   21427 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined IP address 192.168.39.130 and MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:09.074485   21427 main.go:141] libmachine: (functional-882018) Calling .GetSSHPort
I0318 20:46:09.074642   21427 main.go:141] libmachine: (functional-882018) Calling .GetSSHKeyPath
I0318 20:46:09.074796   21427 main.go:141] libmachine: (functional-882018) Calling .GetSSHUsername
I0318 20:46:09.074894   21427 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/functional-882018/id_rsa Username:docker}
I0318 20:46:09.164423   21427 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 20:46:09.230588   21427 main.go:141] libmachine: Making call to close driver server
I0318 20:46:09.230601   21427 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:09.230797   21427 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:09.230819   21427 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 20:46:09.230833   21427 main.go:141] libmachine: Making call to close driver server
I0318 20:46:09.230840   21427 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:09.230845   21427 main.go:141] libmachine: (functional-882018) DBG | Closing plugin on server side
I0318 20:46:09.231027   21427 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:09.231040   21427 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882018 image ls --format json --alsologtostderr:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206
e98765c"],"repoTags":[],"size":"43824855"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-882018"],"size":"34114467"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.
4"],"size":"74749335"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"siz
e":"97846543"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f
4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"a76bfc1ce9b0a44103f4b3d59d6ea67cd3d077d7b5176b3f789026edf56b7fd7","repoDigests":["localhost/minikube-local-cache-test@sha256:d7d4f6e879f9057ae3763086988c2c43bfb20e5e6af7f654b9dd4de3485958c2"],"repoTags":["localhost/minikube-local-cache-test:functional-882018"],"size":"3345"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registr
y.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/co
redns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882018 image ls --format json --alsologtostderr:
I0318 20:46:08.763345   21349 out.go:291] Setting OutFile to fd 1 ...
I0318 20:46:08.763529   21349 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:08.763557   21349 out.go:304] Setting ErrFile to fd 2...
I0318 20:46:08.763573   21349 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:08.764002   21349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
I0318 20:46:08.765452   21349 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:08.765627   21349 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:08.766002   21349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:08.766051   21349 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:08.779088   21349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
I0318 20:46:08.779574   21349 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:08.780182   21349 main.go:141] libmachine: Using API Version  1
I0318 20:46:08.780209   21349 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:08.780576   21349 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:08.780761   21349 main.go:141] libmachine: (functional-882018) Calling .GetState
I0318 20:46:08.782823   21349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:08.782867   21349 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:08.796500   21349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
I0318 20:46:08.796892   21349 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:08.797342   21349 main.go:141] libmachine: Using API Version  1
I0318 20:46:08.797361   21349 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:08.797853   21349 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:08.797994   21349 main.go:141] libmachine: (functional-882018) Calling .DriverName
I0318 20:46:08.798132   21349 ssh_runner.go:195] Run: systemctl --version
I0318 20:46:08.798165   21349 main.go:141] libmachine: (functional-882018) Calling .GetSSHHostname
I0318 20:46:08.800916   21349 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:08.801290   21349 main.go:141] libmachine: (functional-882018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:e4:3f", ip: ""} in network mk-functional-882018: {Iface:virbr1 ExpiryTime:2024-03-18 21:42:30 +0000 UTC Type:0 Mac:52:54:00:bb:e4:3f Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:functional-882018 Clientid:01:52:54:00:bb:e4:3f}
I0318 20:46:08.801316   21349 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined IP address 192.168.39.130 and MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:08.801451   21349 main.go:141] libmachine: (functional-882018) Calling .GetSSHPort
I0318 20:46:08.801779   21349 main.go:141] libmachine: (functional-882018) Calling .GetSSHKeyPath
I0318 20:46:08.801912   21349 main.go:141] libmachine: (functional-882018) Calling .GetSSHUsername
I0318 20:46:08.802011   21349 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/functional-882018/id_rsa Username:docker}
I0318 20:46:08.907681   21349 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 20:46:08.983766   21349 main.go:141] libmachine: Making call to close driver server
I0318 20:46:08.983781   21349 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:08.984007   21349 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:08.984023   21349 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 20:46:08.984029   21349 main.go:141] libmachine: (functional-882018) DBG | Closing plugin on server side
I0318 20:46:08.984035   21349 main.go:141] libmachine: Making call to close driver server
I0318 20:46:08.984044   21349 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:08.984254   21349 main.go:141] libmachine: (functional-882018) DBG | Closing plugin on server side
I0318 20:46:08.984286   21349 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:08.984299   21349 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
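Note (added sketch, not part of the test output): the JSON printed above is a flat array of image records, so it decodes into a small struct. This is a hypothetical consumer assuming minikube is on PATH; the field names mirror the keys visible in the output, and "size" is a string there, not a number.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-882018", "image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		// %.12s safely truncates the 64-character image id for display.
		fmt.Printf("%.12s  tags=%v  size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}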

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882018 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a76bfc1ce9b0a44103f4b3d59d6ea67cd3d077d7b5176b3f789026edf56b7fd7
repoDigests:
- localhost/minikube-local-cache-test@sha256:d7d4f6e879f9057ae3763086988c2c43bfb20e5e6af7f654b9dd4de3485958c2
repoTags:
- localhost/minikube-local-cache-test:functional-882018
size: "3345"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-882018
size: "34114467"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882018 image ls --format yaml --alsologtostderr:
I0318 20:46:08.758639   21347 out.go:291] Setting OutFile to fd 1 ...
I0318 20:46:08.758777   21347 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:08.758807   21347 out.go:304] Setting ErrFile to fd 2...
I0318 20:46:08.758815   21347 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:08.761468   21347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
I0318 20:46:08.762260   21347 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:08.762354   21347 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:08.762832   21347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:08.762874   21347 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:08.777402   21347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
I0318 20:46:08.777853   21347 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:08.778377   21347 main.go:141] libmachine: Using API Version  1
I0318 20:46:08.778395   21347 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:08.778856   21347 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:08.779007   21347 main.go:141] libmachine: (functional-882018) Calling .GetState
I0318 20:46:08.781017   21347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:08.781048   21347 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:08.794854   21347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38471
I0318 20:46:08.795174   21347 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:08.795640   21347 main.go:141] libmachine: Using API Version  1
I0318 20:46:08.795660   21347 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:08.795940   21347 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:08.796122   21347 main.go:141] libmachine: (functional-882018) Calling .DriverName
I0318 20:46:08.796286   21347 ssh_runner.go:195] Run: systemctl --version
I0318 20:46:08.796315   21347 main.go:141] libmachine: (functional-882018) Calling .GetSSHHostname
I0318 20:46:08.799750   21347 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:08.800100   21347 main.go:141] libmachine: (functional-882018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:e4:3f", ip: ""} in network mk-functional-882018: {Iface:virbr1 ExpiryTime:2024-03-18 21:42:30 +0000 UTC Type:0 Mac:52:54:00:bb:e4:3f Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:functional-882018 Clientid:01:52:54:00:bb:e4:3f}
I0318 20:46:08.800116   21347 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined IP address 192.168.39.130 and MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:08.800659   21347 main.go:141] libmachine: (functional-882018) Calling .GetSSHPort
I0318 20:46:08.800823   21347 main.go:141] libmachine: (functional-882018) Calling .GetSSHKeyPath
I0318 20:46:08.801159   21347 main.go:141] libmachine: (functional-882018) Calling .GetSSHUsername
I0318 20:46:08.801291   21347 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/functional-882018/id_rsa Username:docker}
I0318 20:46:08.883756   21347 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 20:46:08.937508   21347 main.go:141] libmachine: Making call to close driver server
I0318 20:46:08.937519   21347 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:08.937783   21347 main.go:141] libmachine: (functional-882018) DBG | Closing plugin on server side
I0318 20:46:08.937832   21347 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:08.937865   21347 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 20:46:08.937875   21347 main.go:141] libmachine: Making call to close driver server
I0318 20:46:08.937886   21347 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:08.938137   21347 main.go:141] libmachine: (functional-882018) DBG | Closing plugin on server side
I0318 20:46:08.938165   21347 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:08.938185   21347 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 ssh pgrep buildkitd: exit status 1 (203.606909ms)

** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image build -t localhost/my-image:functional-882018 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 image build -t localhost/my-image:functional-882018 testdata/build --alsologtostderr: (3.050913778s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-882018 image build -t localhost/my-image:functional-882018 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7fe0366b673
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-882018
--> 6463c029586
Successfully tagged localhost/my-image:functional-882018
6463c029586b8ff45f0eb5df016600ace6e25f1597aa799bf179c35795fdfc40
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-882018 image build -t localhost/my-image:functional-882018 testdata/build --alsologtostderr:
I0318 20:46:09.200304   21468 out.go:291] Setting OutFile to fd 1 ...
I0318 20:46:09.200438   21468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:09.200447   21468 out.go:304] Setting ErrFile to fd 2...
I0318 20:46:09.200451   21468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 20:46:09.200639   21468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
I0318 20:46:09.201188   21468 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:09.201711   21468 config.go:182] Loaded profile config "functional-882018": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 20:46:09.202099   21468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:09.202132   21468 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:09.216082   21468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
I0318 20:46:09.216561   21468 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:09.217091   21468 main.go:141] libmachine: Using API Version  1
I0318 20:46:09.217110   21468 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:09.217421   21468 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:09.217593   21468 main.go:141] libmachine: (functional-882018) Calling .GetState
I0318 20:46:09.219361   21468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 20:46:09.219393   21468 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 20:46:09.234345   21468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
I0318 20:46:09.234734   21468 main.go:141] libmachine: () Calling .GetVersion
I0318 20:46:09.235205   21468 main.go:141] libmachine: Using API Version  1
I0318 20:46:09.235226   21468 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 20:46:09.235590   21468 main.go:141] libmachine: () Calling .GetMachineName
I0318 20:46:09.235768   21468 main.go:141] libmachine: (functional-882018) Calling .DriverName
I0318 20:46:09.235993   21468 ssh_runner.go:195] Run: systemctl --version
I0318 20:46:09.236015   21468 main.go:141] libmachine: (functional-882018) Calling .GetSSHHostname
I0318 20:46:09.238435   21468 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:09.238816   21468 main.go:141] libmachine: (functional-882018) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:e4:3f", ip: ""} in network mk-functional-882018: {Iface:virbr1 ExpiryTime:2024-03-18 21:42:30 +0000 UTC Type:0 Mac:52:54:00:bb:e4:3f Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:functional-882018 Clientid:01:52:54:00:bb:e4:3f}
I0318 20:46:09.238876   21468 main.go:141] libmachine: (functional-882018) DBG | domain functional-882018 has defined IP address 192.168.39.130 and MAC address 52:54:00:bb:e4:3f in network mk-functional-882018
I0318 20:46:09.239090   21468 main.go:141] libmachine: (functional-882018) Calling .GetSSHPort
I0318 20:46:09.239266   21468 main.go:141] libmachine: (functional-882018) Calling .GetSSHKeyPath
I0318 20:46:09.239399   21468 main.go:141] libmachine: (functional-882018) Calling .GetSSHUsername
I0318 20:46:09.239547   21468 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/functional-882018/id_rsa Username:docker}
I0318 20:46:09.324401   21468 build_images.go:161] Building image from path: /tmp/build.1195504245.tar
I0318 20:46:09.324451   21468 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0318 20:46:09.340858   21468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1195504245.tar
I0318 20:46:09.346136   21468 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1195504245.tar: stat -c "%s %y" /var/lib/minikube/build/build.1195504245.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1195504245.tar': No such file or directory
I0318 20:46:09.346157   21468 ssh_runner.go:362] scp /tmp/build.1195504245.tar --> /var/lib/minikube/build/build.1195504245.tar (3072 bytes)
I0318 20:46:09.377562   21468 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1195504245
I0318 20:46:09.390293   21468 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1195504245 -xf /var/lib/minikube/build/build.1195504245.tar
I0318 20:46:09.402813   21468 crio.go:315] Building image: /var/lib/minikube/build/build.1195504245
I0318 20:46:09.402921   21468 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-882018 /var/lib/minikube/build/build.1195504245 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0318 20:46:12.169735   21468 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-882018 /var/lib/minikube/build/build.1195504245 --cgroup-manager=cgroupfs: (2.766778062s)
I0318 20:46:12.169813   21468 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1195504245
I0318 20:46:12.182744   21468 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1195504245.tar
I0318 20:46:12.194611   21468 build_images.go:217] Built localhost/my-image:functional-882018 from /tmp/build.1195504245.tar
I0318 20:46:12.194642   21468 build_images.go:133] succeeded building to: functional-882018
I0318 20:46:12.194646   21468 build_images.go:134] failed building to: 
I0318 20:46:12.194670   21468 main.go:141] libmachine: Making call to close driver server
I0318 20:46:12.194680   21468 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:12.194964   21468 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:12.194983   21468 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 20:46:12.194988   21468 main.go:141] libmachine: (functional-882018) DBG | Closing plugin on server side
I0318 20:46:12.194992   21468 main.go:141] libmachine: Making call to close driver server
I0318 20:46:12.195000   21468 main.go:141] libmachine: (functional-882018) Calling .Close
I0318 20:46:12.195223   21468 main.go:141] libmachine: Successfully made call to close driver server
I0318 20:46:12.195239   21468 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)
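For reference, the build path visible in the log above is: probe for buildkitd over ssh (absent on crio, hence the expected exit status 1), tar the local `testdata/build` context, scp it to `/var/lib/minikube/build/` on the guest, and run `sudo podman build` there. A minimal sketch of driving the same logged command from Go, assuming the `out/minikube-linux-amd64` binary and the `functional-882018` profile from this run are still present:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation as functional_test.go:314 above; binary path, profile
	// name and tag are assumptions carried over from this test run.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-882018",
		"image", "build",
		"-t", "localhost/my-image:functional-882018",
		"testdata/build",
		"--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // podman build steps, as captured in the stdout block above
	if err != nil {
		log.Fatalf("image build failed: %v", err)
	}
}
```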

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.005211607s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-882018
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.03s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdspecific-port232924837/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (210.821331ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdspecific-port232924837/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 ssh "sudo umount -f /mount-9p": exit status 1 (236.97308ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-882018 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdspecific-port232924837/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)
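The specific-port flow above is two cooperating processes: a long-running `minikube mount ... --port 46464` daemon on the host and an ssh probe (`findmnt -T /mount-9p`) inside the guest; the first probe races the mount and is allowed to fail once. A minimal sketch of the same pattern, where the source directory is a placeholder (the test used a per-run temp directory):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the 9p mount daemon in the background; /tmp/mount-src is an
	// illustrative assumption, the profile name and port come from this run.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-882018",
		"/tmp/mount-src:/mount-9p",
		"--port", "46464")
	if err := mount.Start(); err != nil {
		fmt.Println("could not start mount daemon:", err)
		return
	}
	defer mount.Process.Kill()

	// Poll the guest until findmnt sees the 9p mount, mirroring the retry
	// visible in the log above.
	for i := 0; i < 10; i++ {
		probe := exec.Command("out/minikube-linux-amd64", "-p", "functional-882018",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := probe.CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never became visible")
}
```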

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 service list -o json
functional_test.go:1490: Took "838.843869ms" to run "out/minikube-linux-amd64 -p functional-882018 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.130:30732
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image load --daemon gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 image load --daemon gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr: (7.151741357s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220706046/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220706046/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220706046/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T" /mount1: exit status 1 (336.191735ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-882018 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220706046/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220706046/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-882018 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220706046/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.130:30732
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)
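`service hello-node --url` resolves the same NodePort endpoint the HTTPS variant printed earlier, here over plain HTTP. A minimal sketch that fetches the URL and issues one request against it, assuming the hello-node service from this profile is still exposed:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the endpoint, as functional_test.go:1555 does above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-882018",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("endpoint:", url)

	// Exercise the NodePort once; the test itself only asserts that a URL comes back.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: %d bytes\n", resp.Status, len(body))
}
```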

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 version -o=json --components: (1.000275242s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image load --daemon gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 image load --daemon gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr: (2.699283433s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2024/03/18 20:45:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.990305745s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-882018
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image load --daemon gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 image load --daemon gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr: (9.817599493s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 update-context --alsologtostderr -v=2
E0318 20:45:55.119311   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image save gcr.io/google-containers/addon-resizer:functional-882018 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 image save gcr.io/google-containers/addon-resizer:functional-882018 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.844450121s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image rm gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.187667444s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-882018
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-882018 image save --daemon gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-882018 image save --daemon gcr.io/google-containers/addon-resizer:functional-882018 --alsologtostderr: (1.310291901s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-882018
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.34s)
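Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon form a round trip: export the tagged image from the cluster runtime to a tar on the host, load it back in, and finally push it into the host docker daemon. A minimal sketch of the save/load half, with the tar path taken verbatim from this run:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes one minikube CLI call and echoes its combined output,
// mirroring the Run/Done pairs in the log above.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	img := "gcr.io/google-containers/addon-resizer:functional-882018"
	tar := "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"

	// Save from the cluster's runtime to a host-side tarball ...
	if err := run("-p", "functional-882018", "image", "save", img, tar, "--alsologtostderr"); err != nil {
		fmt.Println("save failed:", err)
		return
	}
	// ... load it back, then confirm it is listed again.
	if err := run("-p", "functional-882018", "image", "load", tar, "--alsologtostderr"); err != nil {
		fmt.Println("load failed:", err)
		return
	}
	_ = run("-p", "functional-882018", "image", "ls")
}
```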

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-882018
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-882018
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-882018
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (300.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-315064 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 20:46:36.080231   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:47:58.001044   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:50:14.157986   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:50:23.236711   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:23.241988   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:23.252250   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:23.272523   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:23.312818   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:23.393157   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:23.553544   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:23.874076   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:24.514250   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:25.795446   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:28.355985   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:33.476864   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:50:41.842289   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 20:50:43.717054   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 20:51:04.198153   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-315064 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m59.937815814s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (300.64s)
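StartCluster brings the whole HA topology up in one call: `--ha` requests multiple control planes, `--wait=true` blocks until every component is healthy, and the follow-up `status` checks each node. A minimal sketch of the same pair of calls, reusing the flags and the roughly five-minute budget seen above:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// ha_test.go:101 above; profile name and flags are copied from the log.
	up := exec.Command("out/minikube-linux-amd64", "start", "-p", "ha-315064",
		"--wait=true", "--memory=2200", "--ha",
		"-v=7", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	if out, err := up.CombinedOutput(); err != nil {
		fmt.Printf("start failed after %s: %v\n%s", time.Since(start), err, out)
		return
	}
	fmt.Println("cluster up in", time.Since(start))

	// ha_test.go:107: every control-plane and worker node should report healthy.
	status, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-315064",
		"status", "-v=7", "--alsologtostderr").CombinedOutput()
	fmt.Print(string(status))
}
```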

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-315064 -- rollout status deployment/busybox: (4.331237019s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-5hmqj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-7z7sj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-c7lzc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-5hmqj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-7z7sj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-c7lzc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-5hmqj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-7z7sj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-c7lzc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.89s)
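DeployApp applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox Deployment to roll out, then runs nslookup for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local inside every pod, proving cluster DNS works from each node. A minimal sketch of that verification loop, calling kubectl directly rather than through the `minikube kubectl --` wrapper the test uses:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "ha-315064" // kube context for this profile, carried over from the log

	// Collect the pod names, as ha_test.go:163 does above (in this run the
	// default namespace only contains the busybox pods).
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("listing pods failed:", err)
		return
	}
	pods := strings.Fields(string(out))

	// Each pod must resolve an external name and the in-cluster service names.
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			lookup, err := exec.Command("kubectl", "--context", ctx,
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n%s", pod, name, err, lookup)
				continue
			}
			fmt.Printf("%s: %s resolved\n", pod, name)
		}
	}
}
```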

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-5hmqj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-5hmqj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-7z7sj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-7z7sj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-c7lzc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-315064 -- exec busybox-5b5d89c9d6-c7lzc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (47.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-315064 -v=7 --alsologtostderr
E0318 20:51:45.159162   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-315064 -v=7 --alsologtostderr: (46.592798508s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-315064 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp testdata/cp-test.txt ha-315064:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064:/home/docker/cp-test.txt ha-315064-m02:/home/docker/cp-test_ha-315064_ha-315064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m02 "sudo cat /home/docker/cp-test_ha-315064_ha-315064-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064:/home/docker/cp-test.txt ha-315064-m03:/home/docker/cp-test_ha-315064_ha-315064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m03 "sudo cat /home/docker/cp-test_ha-315064_ha-315064-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064:/home/docker/cp-test.txt ha-315064-m04:/home/docker/cp-test_ha-315064_ha-315064-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m04 "sudo cat /home/docker/cp-test_ha-315064_ha-315064-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp testdata/cp-test.txt ha-315064-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m02:/home/docker/cp-test.txt ha-315064:/home/docker/cp-test_ha-315064-m02_ha-315064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064 "sudo cat /home/docker/cp-test_ha-315064-m02_ha-315064.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m02:/home/docker/cp-test.txt ha-315064-m03:/home/docker/cp-test_ha-315064-m02_ha-315064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m03 "sudo cat /home/docker/cp-test_ha-315064-m02_ha-315064-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m02:/home/docker/cp-test.txt ha-315064-m04:/home/docker/cp-test_ha-315064-m02_ha-315064-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m04 "sudo cat /home/docker/cp-test_ha-315064-m02_ha-315064-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp testdata/cp-test.txt ha-315064-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt ha-315064:/home/docker/cp-test_ha-315064-m03_ha-315064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064 "sudo cat /home/docker/cp-test_ha-315064-m03_ha-315064.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt ha-315064-m02:/home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m02 "sudo cat /home/docker/cp-test_ha-315064-m03_ha-315064-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m03:/home/docker/cp-test.txt ha-315064-m04:/home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m04 "sudo cat /home/docker/cp-test_ha-315064-m03_ha-315064-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp testdata/cp-test.txt ha-315064-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile954184052/001/cp-test_ha-315064-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt ha-315064:/home/docker/cp-test_ha-315064-m04_ha-315064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064 "sudo cat /home/docker/cp-test_ha-315064-m04_ha-315064.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt ha-315064-m02:/home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m02 "sudo cat /home/docker/cp-test_ha-315064-m04_ha-315064-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 cp ha-315064-m04:/home/docker/cp-test.txt ha-315064-m03:/home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 ssh -n ha-315064-m03 "sudo cat /home/docker/cp-test_ha-315064-m04_ha-315064-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.44s)
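CopyFile exercises every direction of `minikube cp`: testdata/cp-test.txt is pushed to each node, pulled back to a host temp dir, and copied from each node to every other node, with an `ssh ... sudo cat` after each step to confirm the contents arrived. A compact sketch of the node-to-node part of that matrix over the four node names seen above:

```go
package main

import (
	"fmt"
	"os/exec"
)

// mk runs one minikube command against the ha-315064 profile and echoes its output.
func mk(args ...string) {
	out, _ := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "ha-315064"}, args...)...).CombinedOutput()
	fmt.Print(string(out))
}

func main() {
	nodes := []string{"ha-315064", "ha-315064-m02", "ha-315064-m03", "ha-315064-m04"}

	for _, src := range nodes {
		// Host -> node, then read it back over ssh (helpers_test.go:556/534 above).
		mk("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		mk("ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")

		// Node -> every other node, using the same file-name pattern as the log.
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			mk("cp", src+":/home/docker/cp-test.txt", remote)
			mk("ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
}
```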

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.490680678s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-315064 node delete m03 -v=7 --alsologtostderr: (16.617817939s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (264.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-315064 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 21:06:46.281209   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-315064 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m23.792065809s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (264.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (89.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-315064 --control-plane -v=7 --alsologtostderr
E0318 21:10:14.157704   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:10:23.236322   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-315064 --control-plane -v=7 --alsologtostderr: (1m28.542664918s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-315064 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (89.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
x
+
TestJSONOutput/start/Command (89.29s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-646050 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-646050 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.292759317s)
--- PASS: TestJSONOutput/start/Command (89.29s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
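The JSONOutput/start subtests reason about the event stream produced by `--output=json`: every stdout line is one JSON event, and the step events must carry distinct, strictly increasing step counters. A minimal sketch that re-runs the logged command and prints each event generically; field names such as `data.currentstep` reflect this minikube build's event schema and should be treated as an assumption here:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// json_output_test.go:63 above; flags copied from the log.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "json-output-646050",
		"--output=json", "--user=testUser",
		"--memory=2200", "--wait=true",
		"--driver=kvm2", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println("pipe failed:", err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}

	// Each output line is one JSON event; the Distinct/IncreasingCurrentSteps
	// checks above look at the step counter inside the event payload.
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		var ev map[string]interface{}
		if json.Unmarshal(scanner.Bytes(), &ev) != nil {
			continue // skip any non-JSON noise
		}
		if data, ok := ev["data"].(map[string]interface{}); ok {
			fmt.Printf("type=%v step=%v msg=%v\n", ev["type"], data["currentstep"], data["message"])
		}
	}
	cmd.Wait()
}
```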

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-646050 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-646050 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.47s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-646050 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-646050 --output=json --user=testUser: (7.465909813s)
--- PASS: TestJSONOutput/stop/Command (7.47s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-415413 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-415413 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.311161ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"061851b1-06ec-4c9f-adc8-d7f64eadbab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-415413] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"250729b4-a326-45ea-8a79-2bc22ac4d387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18421"}}
	{"specversion":"1.0","id":"5aee0445-f02e-47af-a817-cd6f370c80aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"08fedec6-919e-4ce0-a083-b74fe3e2ed25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig"}}
	{"specversion":"1.0","id":"9970a840-9c34-43c7-b896-e0ed0da92ca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube"}}
	{"specversion":"1.0","id":"60115485-63f7-40d2-a8db-50a50a074311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"eb5afa1f-22d2-4214-ad21-06ed2bfcdee7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"41ccb821-cf76-490c-9cf5-7010f35bbfe1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-415413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-415413
--- PASS: TestErrorJSONOutput (0.21s)
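Note on the output format exercised by the TestJSONOutput and TestErrorJSONOutput runs: each line emitted under --output=json is a CloudEvents-style JSON object like the ones captured in the stdout block above. The sketch below is a minimal Go decoder for such a stream; the field names are taken from that stdout, and the event struct is illustrative for this report, not a type exported by minikube.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the field names visible in the --output=json lines captured above.
// It is an illustrative struct for this report, not a type exported by minikube.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe the JSON stream in, e.g.: minikube start --output=json | ./thisprogram
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore any lines that are not JSON events
		}
		// Step events carry data.currentstep/message; the error event above carries data.exitcode/name.
		fmt.Printf("%-45s %s\n", e.Type, e.Data["message"])
	}
}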

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (92.16s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-832454 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-832454 --driver=kvm2  --container-runtime=crio: (44.641068875s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-835323 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-835323 --driver=kvm2  --container-runtime=crio: (44.863728078s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-832454
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-835323
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-835323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-835323
helpers_test.go:175: Cleaning up "first-832454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-832454
--- PASS: TestMinikubeProfile (92.16s)
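profile list -ojson (used twice above, and again in TestMultiNode/serial/ProfileList below) prints a single JSON document. The sketch below decodes it without assuming minikube's exact schema, only that the top level is a JSON object; invoking a plain "minikube" on PATH is an assumption, since this report runs the locally built out/minikube-linux-amd64.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "profile list failed:", err)
		os.Exit(1)
	}
	// Decode only to the top-level keys rather than assuming minikube's exact schema.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		fmt.Fprintln(os.Stderr, "unexpected output:", err)
		os.Exit(1)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}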

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-780432 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-780432 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.873423415s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.87s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-780432 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-780432 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.57s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-792611 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0318 21:15:14.157972   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:15:23.236550   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-792611 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.565210001s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.57s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792611 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792611 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-780432 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792611 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792611 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-792611
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-792611: (1.410642466s)
--- PASS: TestMountStart/serial/Stop (1.41s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.09s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-792611
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-792611: (22.090795099s)
--- PASS: TestMountStart/serial/RestartStopped (23.09s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792611 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-792611 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119391 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-119391 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m47.481906463s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.90s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-119391 -- rollout status deployment/busybox: (4.883634807s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-dr5bb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-w6n2g -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-dr5bb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-w6n2g -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-dr5bb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-w6n2g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.58s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-dr5bb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-dr5bb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-w6n2g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119391 -- exec busybox-5b5d89c9d6-w6n2g -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (40.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-119391 -v 3 --alsologtostderr
E0318 21:18:17.203835   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-119391 -v 3 --alsologtostderr: (40.066691073s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.65s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-119391 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp testdata/cp-test.txt multinode-119391:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2904894668/001/cp-test_multinode-119391.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391:/home/docker/cp-test.txt multinode-119391-m02:/home/docker/cp-test_multinode-119391_multinode-119391-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m02 "sudo cat /home/docker/cp-test_multinode-119391_multinode-119391-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391:/home/docker/cp-test.txt multinode-119391-m03:/home/docker/cp-test_multinode-119391_multinode-119391-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m03 "sudo cat /home/docker/cp-test_multinode-119391_multinode-119391-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp testdata/cp-test.txt multinode-119391-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2904894668/001/cp-test_multinode-119391-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391-m02:/home/docker/cp-test.txt multinode-119391:/home/docker/cp-test_multinode-119391-m02_multinode-119391.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391 "sudo cat /home/docker/cp-test_multinode-119391-m02_multinode-119391.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391-m02:/home/docker/cp-test.txt multinode-119391-m03:/home/docker/cp-test_multinode-119391-m02_multinode-119391-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m03 "sudo cat /home/docker/cp-test_multinode-119391-m02_multinode-119391-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp testdata/cp-test.txt multinode-119391-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2904894668/001/cp-test_multinode-119391-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt multinode-119391:/home/docker/cp-test_multinode-119391-m03_multinode-119391.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391 "sudo cat /home/docker/cp-test_multinode-119391-m03_multinode-119391.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 cp multinode-119391-m03:/home/docker/cp-test.txt multinode-119391-m02:/home/docker/cp-test_multinode-119391-m03_multinode-119391-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 ssh -n multinode-119391-m02 "sudo cat /home/docker/cp-test_multinode-119391-m03_multinode-119391-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.49s)

                                                
                                    
TestMultiNode/serial/StopNode (3.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-119391 node stop m03: (2.289929432s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-119391 status: exit status 7 (434.227258ms)

                                                
                                                
-- stdout --
	multinode-119391
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-119391-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-119391-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-119391 status --alsologtostderr: exit status 7 (433.074053ms)

                                                
                                                
-- stdout --
	multinode-119391
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-119391-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-119391-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:18:50.594083   37388 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:18:50.594214   37388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:18:50.594227   37388 out.go:304] Setting ErrFile to fd 2...
	I0318 21:18:50.594234   37388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:18:50.594430   37388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:18:50.594582   37388 out.go:298] Setting JSON to false
	I0318 21:18:50.594604   37388 mustload.go:65] Loading cluster: multinode-119391
	I0318 21:18:50.594729   37388 notify.go:220] Checking for updates...
	I0318 21:18:50.595097   37388 config.go:182] Loaded profile config "multinode-119391": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:18:50.595117   37388 status.go:255] checking status of multinode-119391 ...
	I0318 21:18:50.595609   37388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:18:50.595656   37388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:18:50.615207   37388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44365
	I0318 21:18:50.615578   37388 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:18:50.616114   37388 main.go:141] libmachine: Using API Version  1
	I0318 21:18:50.616136   37388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:18:50.616485   37388 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:18:50.616689   37388 main.go:141] libmachine: (multinode-119391) Calling .GetState
	I0318 21:18:50.618101   37388 status.go:330] multinode-119391 host status = "Running" (err=<nil>)
	I0318 21:18:50.618120   37388 host.go:66] Checking if "multinode-119391" exists ...
	I0318 21:18:50.618500   37388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:18:50.618545   37388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:18:50.634367   37388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0318 21:18:50.634731   37388 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:18:50.635140   37388 main.go:141] libmachine: Using API Version  1
	I0318 21:18:50.635170   37388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:18:50.635467   37388 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:18:50.635677   37388 main.go:141] libmachine: (multinode-119391) Calling .GetIP
	I0318 21:18:50.638297   37388 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:18:50.638639   37388 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:18:50.638668   37388 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:18:50.638778   37388 host.go:66] Checking if "multinode-119391" exists ...
	I0318 21:18:50.639163   37388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:18:50.639210   37388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:18:50.653832   37388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0318 21:18:50.654169   37388 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:18:50.654628   37388 main.go:141] libmachine: Using API Version  1
	I0318 21:18:50.654647   37388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:18:50.654953   37388 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:18:50.655150   37388 main.go:141] libmachine: (multinode-119391) Calling .DriverName
	I0318 21:18:50.655365   37388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 21:18:50.655384   37388 main.go:141] libmachine: (multinode-119391) Calling .GetSSHHostname
	I0318 21:18:50.657603   37388 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:18:50.657959   37388 main.go:141] libmachine: (multinode-119391) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:8b:23", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:16:19 +0000 UTC Type:0 Mac:52:54:00:1b:8b:23 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-119391 Clientid:01:52:54:00:1b:8b:23}
	I0318 21:18:50.657979   37388 main.go:141] libmachine: (multinode-119391) DBG | domain multinode-119391 has defined IP address 192.168.39.127 and MAC address 52:54:00:1b:8b:23 in network mk-multinode-119391
	I0318 21:18:50.658143   37388 main.go:141] libmachine: (multinode-119391) Calling .GetSSHPort
	I0318 21:18:50.658310   37388 main.go:141] libmachine: (multinode-119391) Calling .GetSSHKeyPath
	I0318 21:18:50.658457   37388 main.go:141] libmachine: (multinode-119391) Calling .GetSSHUsername
	I0318 21:18:50.658569   37388 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391/id_rsa Username:docker}
	I0318 21:18:50.743147   37388 ssh_runner.go:195] Run: systemctl --version
	I0318 21:18:50.750342   37388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 21:18:50.766553   37388 kubeconfig.go:125] found "multinode-119391" server: "https://192.168.39.127:8443"
	I0318 21:18:50.766577   37388 api_server.go:166] Checking apiserver status ...
	I0318 21:18:50.766602   37388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 21:18:50.782443   37388 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0318 21:18:50.792885   37388 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 21:18:50.792947   37388 ssh_runner.go:195] Run: ls
	I0318 21:18:50.798486   37388 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0318 21:18:50.803003   37388 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I0318 21:18:50.803027   37388 status.go:422] multinode-119391 apiserver status = Running (err=<nil>)
	I0318 21:18:50.803037   37388 status.go:257] multinode-119391 status: &{Name:multinode-119391 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 21:18:50.803052   37388 status.go:255] checking status of multinode-119391-m02 ...
	I0318 21:18:50.803344   37388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:18:50.803380   37388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:18:50.818064   37388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0318 21:18:50.818446   37388 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:18:50.818895   37388 main.go:141] libmachine: Using API Version  1
	I0318 21:18:50.818917   37388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:18:50.819211   37388 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:18:50.819396   37388 main.go:141] libmachine: (multinode-119391-m02) Calling .GetState
	I0318 21:18:50.820885   37388 status.go:330] multinode-119391-m02 host status = "Running" (err=<nil>)
	I0318 21:18:50.820920   37388 host.go:66] Checking if "multinode-119391-m02" exists ...
	I0318 21:18:50.821198   37388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:18:50.821231   37388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:18:50.835109   37388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43923
	I0318 21:18:50.835434   37388 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:18:50.835853   37388 main.go:141] libmachine: Using API Version  1
	I0318 21:18:50.835877   37388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:18:50.836172   37388 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:18:50.836342   37388 main.go:141] libmachine: (multinode-119391-m02) Calling .GetIP
	I0318 21:18:50.838982   37388 main.go:141] libmachine: (multinode-119391-m02) DBG | domain multinode-119391-m02 has defined MAC address 52:54:00:9e:7f:c4 in network mk-multinode-119391
	I0318 21:18:50.839338   37388 main.go:141] libmachine: (multinode-119391-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:7f:c4", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:17:25 +0000 UTC Type:0 Mac:52:54:00:9e:7f:c4 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-119391-m02 Clientid:01:52:54:00:9e:7f:c4}
	I0318 21:18:50.839370   37388 main.go:141] libmachine: (multinode-119391-m02) DBG | domain multinode-119391-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:7f:c4 in network mk-multinode-119391
	I0318 21:18:50.839496   37388 host.go:66] Checking if "multinode-119391-m02" exists ...
	I0318 21:18:50.839787   37388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:18:50.839818   37388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:18:50.853633   37388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38129
	I0318 21:18:50.854041   37388 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:18:50.854473   37388 main.go:141] libmachine: Using API Version  1
	I0318 21:18:50.854488   37388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:18:50.854806   37388 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:18:50.854975   37388 main.go:141] libmachine: (multinode-119391-m02) Calling .DriverName
	I0318 21:18:50.855128   37388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 21:18:50.855182   37388 main.go:141] libmachine: (multinode-119391-m02) Calling .GetSSHHostname
	I0318 21:18:50.857898   37388 main.go:141] libmachine: (multinode-119391-m02) DBG | domain multinode-119391-m02 has defined MAC address 52:54:00:9e:7f:c4 in network mk-multinode-119391
	I0318 21:18:50.858284   37388 main.go:141] libmachine: (multinode-119391-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:7f:c4", ip: ""} in network mk-multinode-119391: {Iface:virbr1 ExpiryTime:2024-03-18 22:17:25 +0000 UTC Type:0 Mac:52:54:00:9e:7f:c4 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-119391-m02 Clientid:01:52:54:00:9e:7f:c4}
	I0318 21:18:50.858310   37388 main.go:141] libmachine: (multinode-119391-m02) DBG | domain multinode-119391-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:9e:7f:c4 in network mk-multinode-119391
	I0318 21:18:50.858416   37388 main.go:141] libmachine: (multinode-119391-m02) Calling .GetSSHPort
	I0318 21:18:50.858576   37388 main.go:141] libmachine: (multinode-119391-m02) Calling .GetSSHKeyPath
	I0318 21:18:50.858698   37388 main.go:141] libmachine: (multinode-119391-m02) Calling .GetSSHUsername
	I0318 21:18:50.858817   37388 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18421-5321/.minikube/machines/multinode-119391-m02/id_rsa Username:docker}
	I0318 21:18:50.940627   37388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 21:18:50.957041   37388 status.go:257] multinode-119391-m02 status: &{Name:multinode-119391-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0318 21:18:50.957071   37388 status.go:255] checking status of multinode-119391-m03 ...
	I0318 21:18:50.957365   37388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 21:18:50.957431   37388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 21:18:50.972053   37388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41817
	I0318 21:18:50.972504   37388 main.go:141] libmachine: () Calling .GetVersion
	I0318 21:18:50.972934   37388 main.go:141] libmachine: Using API Version  1
	I0318 21:18:50.972955   37388 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 21:18:50.973275   37388 main.go:141] libmachine: () Calling .GetMachineName
	I0318 21:18:50.973474   37388 main.go:141] libmachine: (multinode-119391-m03) Calling .GetState
	I0318 21:18:50.975013   37388 status.go:330] multinode-119391-m03 host status = "Stopped" (err=<nil>)
	I0318 21:18:50.975027   37388 status.go:343] host is not running, skipping remaining checks
	I0318 21:18:50.975035   37388 status.go:257] multinode-119391-m03 status: &{Name:multinode-119391-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.16s)
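Both status invocations above exit with status 7 once m03 is stopped, so the command's exit code carries node health in addition to the text output. A small sketch that surfaces that code follows; the exit-code convention is inferred only from the run above, and the binary path and profile name are copied from it.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command used above; adjust the binary path and profile name as needed.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-119391", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In the run above, a stopped worker node produced exit status 7.
		fmt.Println("status exited with code", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}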

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-119391 node start m03 -v=7 --alsologtostderr: (29.246103965s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.87s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-119391 node delete m03: (1.645521598s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.19s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (173.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119391 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-119391 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m52.999571378s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119391 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (173.54s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-119391
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119391-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-119391-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.965974ms)

                                                
                                                
-- stdout --
	* [multinode-119391-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-119391-m02' is duplicated with machine name 'multinode-119391-m02' in profile 'multinode-119391'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119391-m03 --driver=kvm2  --container-runtime=crio
E0318 21:30:14.157429   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:30:23.236364   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-119391-m03 --driver=kvm2  --container-runtime=crio: (45.726440839s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-119391
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-119391: exit status 80 (218.293455ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-119391 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-119391-m03 already exists in multinode-119391-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-119391-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.85s)

                                                
                                    
TestScheduledStopUnix (117.08s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-193129 --memory=2048 --driver=kvm2  --container-runtime=crio
E0318 21:34:57.204632   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:35:14.158118   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:35:23.236352   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-193129 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.402279007s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-193129 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-193129 -n scheduled-stop-193129
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-193129 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-193129 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-193129 -n scheduled-stop-193129
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-193129
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-193129 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-193129
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-193129: exit status 7 (74.447934ms)

                                                
                                                
-- stdout --
	scheduled-stop-193129
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-193129 -n scheduled-stop-193129
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-193129 -n scheduled-stop-193129: exit status 7 (74.106247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-193129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-193129
--- PASS: TestScheduledStopUnix (117.08s)
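All of the flags driving this test appear in the log above: --schedule <duration> arms a delayed stop, --cancel-scheduled disarms it, and status --format={{.TimeToStop}} / --format={{.Host}} report the pending timer and host state. Below is a minimal sketch of that sequence, reusing the profile name and locally built binary path from the run above; it is an illustration of the flow, not part of the test itself.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the locally built minikube binary used in this report and echoes its output.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	if err != nil {
		fmt.Println("command returned:", err)
	}
}

func main() {
	profile := "scheduled-stop-193129"
	run("stop", "-p", profile, "--schedule", "5m")           // arm a stop five minutes out
	run("status", "--format={{.TimeToStop}}", "-p", profile) // inspect the pending timer
	run("stop", "-p", profile, "--cancel-scheduled")         // disarm the scheduled stop
	run("status", "--format={{.Host}}", "-p", profile)       // host should still report Running
}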

                                                
                                    
TestRunningBinaryUpgrade (200.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.56616730 start -p running-upgrade-857338 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.56616730 start -p running-upgrade-857338 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m47.913120343s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-857338 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-857338 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m28.17476591s)
helpers_test.go:175: Cleaning up "running-upgrade-857338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-857338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-857338: (1.156497761s)
--- PASS: TestRunningBinaryUpgrade (200.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-779999 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-779999 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (101.864886ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-779999] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-779999 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-779999 --driver=kvm2  --container-runtime=crio: (1m38.723465962s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-779999 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.99s)

                                                
                                    
TestNetworkPlugins/group/false (3.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-389288 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-389288 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (108.312028ms)

                                                
                                                
-- stdout --
	* [false-389288] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18421
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 21:36:51.892612   43972 out.go:291] Setting OutFile to fd 1 ...
	I0318 21:36:51.892741   43972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:36:51.892751   43972 out.go:304] Setting ErrFile to fd 2...
	I0318 21:36:51.892755   43972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 21:36:51.892970   43972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18421-5321/.minikube/bin
	I0318 21:36:51.893525   43972 out.go:298] Setting JSON to false
	I0318 21:36:51.894430   43972 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4756,"bootTime":1710793056,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 21:36:51.894490   43972 start.go:139] virtualization: kvm guest
	I0318 21:36:51.896527   43972 out.go:177] * [false-389288] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 21:36:51.897919   43972 out.go:177]   - MINIKUBE_LOCATION=18421
	I0318 21:36:51.899241   43972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 21:36:51.897998   43972 notify.go:220] Checking for updates...
	I0318 21:36:51.901435   43972 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18421-5321/kubeconfig
	I0318 21:36:51.902507   43972 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18421-5321/.minikube
	I0318 21:36:51.903637   43972 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 21:36:51.904801   43972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 21:36:51.906312   43972 config.go:182] Loaded profile config "NoKubernetes-779999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:36:51.906455   43972 config.go:182] Loaded profile config "force-systemd-env-800500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:36:51.906583   43972 config.go:182] Loaded profile config "offline-crio-753209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 21:36:51.906677   43972 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 21:36:51.942247   43972 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 21:36:51.943153   43972 start.go:297] selected driver: kvm2
	I0318 21:36:51.943167   43972 start.go:901] validating driver "kvm2" against <nil>
	I0318 21:36:51.943180   43972 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 21:36:51.945058   43972 out.go:177] 
	W0318 21:36:51.946139   43972 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0318 21:36:51.947255   43972 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-389288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-389288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-389288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-389288"

                                                
                                                
----------------------- debugLogs end: false-389288 [took: 3.480466182s] --------------------------------
helpers_test.go:175: Cleaning up "false-389288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-389288
--- PASS: TestNetworkPlugins/group/false (3.73s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (40.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-779999 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-779999 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.094925571s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-779999 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-779999 status -o json: exit status 2 (250.550904ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-779999","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-779999
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-779999: (1.062066371s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.41s)

                                                
                                    
TestNoKubernetes/serial/Start (49.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-779999 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-779999 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.604644133s)
--- PASS: TestNoKubernetes/serial/Start (49.60s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-779999 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-779999 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.576364ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-779999
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-779999: (1.41356811s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (69.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-779999 --driver=kvm2  --container-runtime=crio
E0318 21:40:06.282581   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 21:40:14.157724   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:40:23.236331   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-779999 --driver=kvm2  --container-runtime=crio: (1m9.9731247s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (69.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-779999 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-779999 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.388223ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (99.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1232694564 start -p stopped-upgrade-133496 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1232694564 start -p stopped-upgrade-133496 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.744918744s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1232694564 -p stopped-upgrade-133496 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1232694564 -p stopped-upgrade-133496 stop: (2.146759438s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-133496 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-133496 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.033074235s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.93s)

                                                
                                    
TestPause/serial/Start (119.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-578700 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-578700 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m59.945893034s)
--- PASS: TestPause/serial/Start (119.95s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-133496
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (71.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m11.702678776s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.70s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-389288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-389288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2x67j" [6fa169db-b5cc-42e1-9f18-92a4bab53b9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2x67j" [6fa169db-b5cc-42e1-9f18-92a4bab53b9e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004981566s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.22s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-578700 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-578700 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.743301056s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.76s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (25.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-389288 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-389288 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.196161533s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-389288 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-389288 exec deployment/netcat -- nslookup kubernetes.default: (10.177515428s)
--- PASS: TestNetworkPlugins/group/auto/DNS (25.97s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (65.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m5.167474026s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestPause/serial/Pause (1.09s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-578700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-578700 --alsologtostderr -v=5: (1.089728778s)
--- PASS: TestPause/serial/Pause (1.09s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-578700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-578700 --output=json --layout=cluster: exit status 2 (363.018337ms)

                                                
                                                
-- stdout --
	{"Name":"pause-578700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-578700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)

                                                
                                    
TestPause/serial/Unpause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-578700 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.94s)

                                                
                                    
TestPause/serial/PauseAgain (1.23s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-578700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-578700 --alsologtostderr -v=5: (1.22498372s)
--- PASS: TestPause/serial/PauseAgain (1.23s)

                                                
                                    
TestPause/serial/DeletePaused (1.58s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-578700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-578700 --alsologtostderr -v=5: (1.576551571s)
--- PASS: TestPause/serial/DeletePaused (1.58s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (108.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m48.82912031s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.83s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.353518446s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (125.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0318 21:45:14.158018   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:45:23.236322   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m5.337576961s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (125.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rh8rm" [bb1bdefa-2792-484c-adb8-35027d5d63c1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005400845s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-389288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-389288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p4xzw" [cec408d8-b730-41dd-82e1-0d239a573089] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p4xzw" [cec408d8-b730-41dd-82e1-0d239a573089] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.006094117s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-389288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (115.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m55.356121564s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (115.36s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8png9" [f30b8943-f3f4-4606-b66a-e7ea6506e09e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007088625s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (86.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.985838071s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.99s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-389288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-389288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-389288 replace --force -f testdata/netcat-deployment.yaml: (1.099381802s)
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qkpkw" [bb15e057-e136-4753-9a73-f72fa1cf8a4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qkpkw" [bb15e057-e136-4753-9a73-f72fa1cf8a4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00467078s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.80s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-389288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-389288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-389288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9jthf" [35b9c0b0-320d-40d5-b76f-646c843046cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9jthf" [35b9c0b0-320d-40d5-b76f-646c843046cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006857533s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-389288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (102.57s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-389288 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m42.568398027s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-389288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-389288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hgwnh" [5727edc1-69f0-4b82-bed3-c51394832e03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hgwnh" [5727edc1-69f0-4b82-bed3-c51394832e03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005564853s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-389288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nrzwp" [c7d15915-c625-4955-8b63-756908e14f9d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006460211s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-389288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.86s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-389288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context flannel-389288 replace --force -f testdata/netcat-deployment.yaml: (1.566432275s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d2brl" [b23e13f3-8f5d-4dcc-9b8c-22de29198c9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-d2brl" [b23e13f3-8f5d-4dcc-9b8c-22de29198c9e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004857365s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (194.39s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-963041 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-963041 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (3m14.386875359s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (194.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-389288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (110.52s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-141758 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0318 21:49:07.940604   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:07.945889   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:07.956168   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:07.977191   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:08.017478   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:08.097791   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:08.258444   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:08.579348   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:09.219753   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:10.500026   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:49:13.061128   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-141758 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m50.520055859s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (110.52s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-389288 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-389288 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2967z" [3e69c364-77de-4460-a272-f992d264f329] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0318 21:49:18.181711   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2967z" [3e69c364-77de-4460-a272-f992d264f329] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005079388s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-389288 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-389288 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-660775 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0318 21:49:48.902593   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:50:14.157567   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:50:23.236626   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 21:50:29.863253   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:50:38.568390   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:38.573649   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:38.583878   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:38.604123   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:38.644397   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:38.725214   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:38.885609   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:39.206099   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:39.846533   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:41.127574   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:50:43.688403   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-660775 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m3.656005883s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.66s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-660775 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [70ddc499-8b00-4282-9598-9146b092994e] Pending
helpers_test.go:344: "busybox" [70ddc499-8b00-4282-9598-9146b092994e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0318 21:50:48.809349   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
helpers_test.go:344: "busybox" [70ddc499-8b00-4282-9598-9146b092994e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004603759s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-660775 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-141758 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [428e6333-33ef-4431-b4f8-cd45b22c9654] Pending
helpers_test.go:344: "busybox" [428e6333-33ef-4431-b4f8-cd45b22c9654] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [428e6333-33ef-4431-b4f8-cd45b22c9654] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.008334071s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-141758 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-660775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-660775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097273042s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-660775 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-141758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0318 21:50:59.049532   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-141758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051123603s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-141758 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-963041 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0318 21:51:52.886887   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3f9c8026-8490-4959-a7d6-fc5d82c4af3b] Pending
helpers_test.go:344: "busybox" [3f9c8026-8490-4959-a7d6-fc5d82c4af3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0318 21:51:54.167884   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:51:56.728343   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3f9c8026-8490-4959-a7d6-fc5d82c4af3b] Running
E0318 21:52:00.491187   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:52:01.849108   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004132452s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-963041 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-963041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-963041 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (683.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-660775 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-660775 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (11m23.20633037s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-660775 -n default-k8s-diff-port-660775
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (683.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (625.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-141758 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0318 21:53:32.074389   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:35.392071   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:53:42.315427   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:53:49.268112   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:54:02.795758   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-141758 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m25.270015341s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-141758 -n embed-certs-141758
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (625.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (4.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-648232 --alsologtostderr -v=3
E0318 21:54:25.280827   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-648232 --alsologtostderr -v=3: (4.466532278s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-648232 -n old-k8s-version-648232: exit status 7 (69.819121ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-648232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (567.37s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-963041 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0318 21:54:43.756823   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:54:56.002701   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:54:57.313081   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:55:14.157506   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 21:55:23.236224   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 21:55:36.964026   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:55:38.567540   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:55:52.149395   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:56:05.677264   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:56:06.252819   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 21:56:46.282798   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 21:56:51.607619   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:56:58.885017   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:57:13.470525   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:57:19.292596   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 21:57:41.153845   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 21:58:08.306777   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:58:21.833572   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:58:35.989580   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 21:58:49.518359   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
E0318 21:59:07.940492   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/auto-389288/client.crt: no such file or directory
E0318 21:59:15.040947   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 21:59:42.726065   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/bridge-389288/client.crt: no such file or directory
E0318 22:00:14.158136   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/addons-791443/client.crt: no such file or directory
E0318 22:00:23.237050   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/functional-882018/client.crt: no such file or directory
E0318 22:00:38.567694   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/kindnet-389288/client.crt: no such file or directory
E0318 22:01:51.607648   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/calico-389288/client.crt: no such file or directory
E0318 22:02:13.471084   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/custom-flannel-389288/client.crt: no such file or directory
E0318 22:03:08.305808   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 22:03:21.833103   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-963041 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m27.104684074s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963041 -n no-preload-963041
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (567.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (58.73s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-962491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0318 22:18:08.306262   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/enable-default-cni-389288/client.crt: no such file or directory
E0318 22:18:21.833508   12568 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18421-5321/.minikube/profiles/flannel-389288/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-962491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (58.727779459s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.6s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-962491 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-962491 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.597804803s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.67s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-962491 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-962491 --alsologtostderr -v=3: (11.674549526s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-962491 -n newest-cni-962491
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-962491 -n newest-cni-962491: exit status 7 (74.83514ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-962491 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (40.47s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-962491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-962491 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (40.196080244s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-962491 -n newest-cni-962491
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-962491 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-962491 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-962491 -n newest-cni-962491
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-962491 -n newest-cni-962491: exit status 2 (244.158022ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-962491 -n newest-cni-962491
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-962491 -n newest-cni-962491: exit status 2 (243.979943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-962491 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-962491 -n newest-cni-962491
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-962491 -n newest-cni-962491
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.53s)

Test skip (39/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 3.15
271 TestNetworkPlugins/group/cilium 3.48
286 TestStartStop/group/disable-driver-mounts 0.69
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-389288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-389288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-389288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-389288"

                                                
                                                
----------------------- debugLogs end: kubenet-389288 [took: 3.015797123s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-389288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-389288
--- SKIP: TestNetworkPlugins/group/kubenet (3.15s)

TestNetworkPlugins/group/cilium (3.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-389288 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-389288" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-389288

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-389288" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389288"

                                                
                                                
----------------------- debugLogs end: cilium-389288 [took: 3.334750551s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-389288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-389288
--- SKIP: TestNetworkPlugins/group/cilium (3.48s)

TestStartStop/group/disable-driver-mounts (0.69s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-369155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-369155
--- SKIP: TestStartStop/group/disable-driver-mounts (0.69s)